gruez · 24 days ago
The blog post has a bunch of charts, which gives it a veneer of objectivity and rigor, but in reality it's all vibes and conjecture. Meanwhile, recent empirical studies actually point in the opposite direction, showing that AI use increases inequality rather than decreasing it.

https://www.economist.com/content-assets/images/20250215_FNC...

https://www.economist.com/finance-and-economics/2025/02/13/h...

bane · 24 days ago
Of course AI increases inequality. It's automated ladder pulling technology.

To become good at something you have to work through the lower rungs and acquire skill. AI does all those lower level jobs, puts the people who need those jobs for experience on the street, and robs us of future experts.

The people who benefit the most are those who are already at the top of the ladder, investing billions to raise it faster and faster.

EthanHeilman · 23 days ago
AI has been extremely useful at teaching me things. Granted I needed to already know how to learn and work through the math myself, but when I get stuck it is more helpful than any other resource on the internet.

> To become good at something you have to work through the lower rungs and acquire skill. AI does all those lower level jobs, puts the people who need those jobs for experience on the street, and robs us of future experts.

You can still do that with AI: you give yourself assignments and then use the AI as a resource when you get stuck. As you get better, you ask the AI less and less. The fact that the AI is sometimes wrong is like a test that lets you evaluate whether you are internalizing the skills or just trusting the AI.

If we ever have AIs which don't hallucinate, I'd want that added back in as a feature.

_carbyau_ · 24 days ago
Whether ladder raising benefits people now or later, or by how much - I don't know.

But I share your concerns that:

AI doing the lesser tasks of [whatever] ->

fewer (no?) humans will do those tasks ->

fewer (no?) experienced humans to further the state of the art ->

automation-but-stagnation.

But tragedy of the commons says I have to teach my kid to use AI!

rnaarten · 23 days ago
When you have an unfair system, every technological advancement will benefit the few more than the many.

So of course AI falls into this realm.

musicale · 24 days ago
It's the trajectory of automation for the past few decades. Automate many jobs out of existence, and add a much smaller set of higher-skill jobs.
tom_m · 22 days ago
Definitely. I think it's worse than that too. I have a feeling it's going to expose some people higher up that ladder who really shouldn't be there. So it won't just be junior people who struggle, but "senior" people as well. I think that only deepens the inequality.
charcircuit · 24 days ago
AI can teach you the lower rungs more effectively than what existed before.
devonbleak · 24 days ago
Yeah, the graphs make some really big assumptions that don't seem to be backed up anywhere except AI maximalist head canon.

There's also a gap in addressing vibe coded "side projects" that get deployed online as a business. Is the code base super large and complex? No. Is AI capable of taking input from a novice and making something "good enough" in this space? Also no.

skhameneh · 24 days ago
The latter remarks are very strong assumptions underestimating the power AI tools offer.

AI tools are great at unblocking and helping their users explore beyond their own understanding. The tokens in are limited to the users' comprehension, but the tokens out are generated from a vast collection of greater comprehension.

For the novice, it's great at unblocking and expanding capabilities. "Good enough" results from novices are tangible. There is no doubt the volume of "good enough" is perceived as very low by many.

For large and complex codebases, unfortunately, the effects of tech debt (read: objectively subpar practices) translate into context rot at development time. A properly architected and documented codebase that adheres to common, well-structured patterns can easily be broken down into small, easily digestible contexts; conversely, a fragmented codebase does not scale well with LLMs, because the fragmentation seeds the context for the model. The model reflects and acts as an amplifier of what it's fed.

Lerc · 24 days ago
In a sense I agree. I don't necessarily think it has to be the case, but I got the same feeling that it was wearing a white lab coat to look like a scientist. I think it was an honest attempt to express how they perceive the relationships.

I think this could still be a valuable form of communication if you can clearly express that it represents a hypothesis rather than a measurement. The simplest way would be to label the graphs as "hypothesis", but a subtle yet easily identifiable visual change might be better.

Wavy lines for the axes spring to mind as a way to express that. I would worry about the ability to express hypotheses about definitive events that happen when a value crosses an axis, though; you'd probably want a straight line for that. Perhaps it would be sufficient to have wavy lines only at the ends of the axes, beyond the point where the plot appears.
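
For what it's worth, matplotlib's xkcd mode already gives roughly that wobbly, hand-drawn "this is a sketch, not a measurement" look. A minimal example (assuming matplotlib and numpy are installed; the curve and labels are just placeholders):

    # Hand-drawn "hypothesis, not measurement" styling via matplotlib's xkcd mode.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 10, 200)
    with plt.xkcd():                        # wobbly lines, sketchy fonts
        fig, ax = plt.subplots()
        ax.plot(x, 1 - np.exp(-x / 3))      # placeholder "skill over time" curve
        ax.set_xlabel("time")
        ax.set_ylabel("skill")
        ax.set_title("hypothesis, not data")
    plt.show()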

Beyond that, I think the article presumes the flattening of the curve as mastery is achieved. I'm not sure that's a given; perhaps it only seems that way because we evaluate proportional improvement, implicitly placing skill on a logarithmic scale.

I'd still consider the author's post to be in better faith than the Economist links.

I'd like to know what people think, and for them to say it honestly. If they have hard data, they should show it and how it confirms their hypothesis. At the other end of the scale is gathering data and exposing only the measurements that imply a hypothesis you are not brave enough to state explicitly.

Calavar · 24 days ago
The graphic has four studies that show increased inequality and six that show reduced inequality.
tripletao · 24 days ago
> The graphic has four studies that show increased inequality

Three, since Toner-Rodgers 2024 currently seems to be a total fabrication.

https://archive.is/Ql1lQ

gruez · 24 days ago
Read my comment again; the keyword here is "recent". The second link also expands on why it's relevant. It's best to read the whole article, but here's a paragraph that captures the argument:

>The shift in recent economic research supports his observation. Although early studies suggested that lower performers could benefit simply by copying AI outputs, newer studies look at more complex tasks, such as scientific research, running a business and investing money. In these contexts, high performers benefit far more than their lower-performing peers. In some cases, less productive workers see no improvement, or even lose ground.

bgwalter · 24 days ago
Thanks for the links. That should be obvious to anyone who believes that $70 billion datacenters (Meta) are needed and that the investment will be amortized by subscriptions (in Meta's case, also by enhanced user surveillance).

The means of production are in a small oligopoly, the rest will be redundant or exploitable sharecroppers.

(All this under the assumption that "AI" works, which its proponents affirm in public at least.)

Syzygies · 24 days ago
Yup. As a retired mathematician who craves the productivity of an obsessed 28 year old, I've been all in on AI in 2025. I'm now on Claude's $200/month Max plan in order to use Claude Code Opus 4 without restraint. I still hit limits, usually when I run parallel sessions to review a 57 file legacy code base.

For a time I refused to talk with anybody or read anything about AI, because it was all noise that didn't match my hard-earned experience. Recently HN has included some fascinating takes. This isn't one.

I have the opinion that neurodivergents are more successful using AI. This is so easily dismissed as hollow blather, but I have a precise theory backing this opinion.

AI is a giant association engine. Linear encoding (the "King - Man + Woman = Queen" thing) is linear algebra. I taught linear algebra for decades.
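
A toy sketch of that linear structure (the 2-D vectors below are invented purely for illustration; real embeddings have hundreds of dimensions, but the arithmetic is the same):

    # "King - Man + Woman ~= Queen" as plain linear algebra, with made-up
    # 2-D vectors whose axes happen to align with "royalty" and "gender".
    import numpy as np

    emb = {
        "king":  np.array([1.0,  1.0]),
        "queen": np.array([1.0, -1.0]),
        "man":   np.array([0.0,  1.0]),
        "woman": np.array([0.0, -1.0]),
    }

    target = emb["king"] - emb["man"] + emb["woman"]

    def nearest(vec):
        # cosine similarity against the toy vocabulary
        return max(emb, key=lambda w: vec @ emb[w] /
                   (np.linalg.norm(vec) * np.linalg.norm(emb[w])))

    print(nearest(target))  # -> queen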

As I explained to my optometrist today, if you're trying to balance a plate (define a hyperplane) with three fingers, it works better if your fingers are farther apart.

My whole life people have rolled their eyes when I categorize a situation using analogies that are too far flung for their tolerances.

Now I spend most of my time coding with AI, and it responds very well to my "fingers farther apart" far reaching analogies for what I'm trying to focus on. It's an association engine based on linear algebra, and I have an astounding knack for describing subspaces.

AI is raising the ceiling, not the floor.

FranzFerdiNaN · 24 days ago
> Now I spend most of my time coding with AI, and it responds very well to my "fingers farther apart" far reaching analogies for what I'm trying to focus on.

If you made analogies based on Warhammer 40k or species of mosquitoes it would have reacted exactly the same.

__mharrison__ · 24 days ago
Can you explain your finger analogy a little more? What do the fingers represent?
trod1234 · 22 days ago
I'm honestly tired of all the misinformation about AI being posted.

You are correct. It's not hard to see why (AI imposes cost interference), but there are a lot of bots that keep promoting slop, and moderation doesn't seem to be doing anything about it.

I'm tired of seeing a significant percentage of the article posts in the top 300 being slop.


throwmeaway222 · 24 days ago
> inequality

It's free for everyone with a phone or a laptop.

stillpointlab · 24 days ago
This mirrors insights from Andrew Ng's recent AI startup talk [1].

I recall he mentions in this video that the new advice they are giving to founders is to throw away prototypes when they pivot instead of building onto a core foundation. This is because of the effects described in the article.

He also gives some provisional numbers (see the section "Rapid Prototyping and Engineering" and slides ~10:30) where he suggests prototype development sees a 10x boost compared to a 30-50% improvement for existing production codebases.

This feels vaguely analogous to the shift from "pets" to "livestock" when the industry moved from VMs to containers. Except the new view is that your codebase is more like livestock and less like a pet. If true (and no doubt this will be a contentious topic for programmers who are excellent "pet" owners), then there may be some advantage in this new coding-agent world to getting in on the ground floor and adopting practices that make LLMs productive.

1. https://www.youtube.com/watch?v=RNJCfif1dPY

falcor84 · 24 days ago
Great point, but just mentioning (nitpicking?) that I've never heard machines/containers referred to as "livestock"; in my milieu it's always "pets" vs. "cattle". I now wonder if it's a geographical thing.
bayindirh · 24 days ago
Yeah, the CERN talk* [0] coined the Pets vs. Cattle analogy, and that was way before VMs were cheap on bare metal. I think the wording just evolved as the idea took root in the community.

We've used the same analogy for the last 20 years or so. Provisioning 150 cattle servers takes 15 minutes or so, while provisioning a pet takes a couple of hours, at most.

[0]: https://www.engineyard.com/blog/pets-vs-cattle/

*: The Engine Yard post notes that Microsoft's Bill Baker used the term earlier, though CERN's date (2012) checks out with our own effort timeline and how we got started.

HPsquared · 24 days ago
Boxen? (Oxen)
skmurphy · 24 days ago
Thanks for pointing this out. I think this is an insightful analogy. We will likely manage generated code in the same way we manage large cloud computing complexes.

This probably does not apply to legacy code that has been in use for several years where the production deployment gives you a higher level of confidence (and a higher risk of regression errors with changes).

Have you blogged about your insights? The https://stillpointlab.com site is very sparse, as is @stillpointlab.

stillpointlab · 24 days ago
I'm currently in build mode. In some sense, my project is the most overcomplicated blog engine in the history of personal blog engines. I'm literally working on integrating a markdown editor into the project.

Once I have the MVP working, I will be working on publishing as a means to dogfood the tool. So, check back soon!

eikenberry · 23 days ago
IMO the problem with this pets vs. livestock analogy is that it focuses on the code, when the value is really in the writer's head. Their understanding and mental model of the code is what matters. AI tools can help with managing the code, helping the writer build their models and express their thoughts, but they have zero impact on where the true value is located.
lubujackson · 24 days ago
Oo, the "pets vs. livestock" analogy really works better than the "craftsmen vs. slop-slinger" arguments.

Because using an LLM doesn't mean you devalue well-crafted or understandable results. But it does indicate a significant shift in how you view the code itself. It is more about the emotional attachment to code vs. code as a means to an end.

recursive · 24 days ago
I don't think it's exactly emotional attachment. It's the likelihood that I'm going to get an escalated support ticket caused by this particular piece of slop/artisanally-crafted functionality.
LeftHandPath · 24 days ago
There are some things that you still can't do with LLMs. For example, if you tried to learn chess by having the LLM play against you, you'd quickly find that it isn't able to track a series of moves for very long (usually 5-10 turns; the longest I've seen it last was 18) before it starts making illegal choices. It also generally accepts invalid moves from your side, so you'll never be corrected if you're wrong about how to use a certain piece.

Because it can't actually model these complex problems, it really requires awareness from the user regarding what questions should and shouldn't be asked. An LLM can probably tell you how a knight moves, or how to respond to the London System. It probably can't play a full game of chess with you, and will virtually never be able to advise you on the best move given the state of the board. It probably can give you information about big companies that are well-covered in its training data. It probably can't give you good information about most sub-$1b public companies. But, if you ask, it will give a confident answer.

They're a minefield for most people and use cases, because people aren't aware of how wrong they can be, and the errors take effort and knowledge to notice. It's like walking on a glacier and hoping your next step doesn't plunge through the snow and into a deep, hidden crevasse.

og_kalu · 24 days ago
LLMs playing chess isn't a big deal. You can train a model on chess games and it will play at a decent Elo and very rarely make illegal moves (i.e., a 99.8% legal-move rate). There are a few such models around. I think post-training messes with chess ability and OpenAI et al. just don't really care about that. But LLMs can play chess just fine.

[0] https://arxiv.org/pdf/2403.15498v2

[1] https://github.com/adamkarvonen/chess_gpt_eval

LeftHandPath · 24 days ago
Jeez, that arxiv paper invalidates my assumption that it can't model the game. Great read. Thank you for sharing.

Insane that the model actually does seem to internalize a representation of the state of the board -- rather than just hitting training data with similar move sequences.

...Makes me wish I could get back into a research lab. Been a while since I've stuck to reading a whole paper out of legitimate interest.

(Edit) At the same time, it's still worth noting the accuracy errors and the potential for illegal moves. That's still enough to prevent LLMs from being applied to problem domains with severe consequences, like banking, security, medicine, law, etc.

smiley1437 · 24 days ago
> people aren't aware of how wrong they can be, and the errors take effort and knowledge to notice.

I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI/LLMs make no mistakes.

They were shocked that it's possible for hallucinations to occur. I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output cause some users to assume expertise?

bayindirh · 24 days ago
Computers are always touted as deterministic machines. You can't argue with a compiler or Excel's formula editor.

AI, in all its glory, is seen as an extension of that: a deterministic thing meticulously crafted to provide undisputed truth, which can't make mistakes because computers are deterministic machines.

The idea of LLMs as networks of weights plus some randomness is an abstraction that is both too vague and too complicated for most people. Also, companies tend to say this part very quietly, so when people read the fine print, they get shocked.

viccis · 24 days ago
> I wonder if there's a halo effect where the perfect grammar, structure, and confidence of LLM output causes some users to assume expertise?

I think it's just that LLMs model the generative probability distributions of token sequences so well that what they are nearly infallible at is producing convincing results. Oftentimes the correct result is the most convincing, but other times what seems most convincing to an LLM just happens to also be most convincing to a human, regardless of correctness.

throwawayoldie · 24 days ago
My experience, speaking over a scale of decades, is that most people, even very smart and well-educated ones, don't know a damn thing about how computers work and aren't interested in learning. What we're seeing now is just one unfortunate consequence of that.

(To be fair, in many cases, I'm not terribly interested in learning the details of their field.)

yifanl · 24 days ago
If I wasn't familiar with the latest in computer tech, I would also assume LLMs never make mistakes, after hearing such excited praise for them over the last 3 years.
emporas · 24 days ago
It is only in the last century or so that statistical methods were invented and applied. It is possible for many people to be very competent at what they do and at the same time be totally ignorant of statistics.

There are lies, statistics and goddamn hallucinations.

rplnt · 24 days ago
Have they never used it? The majority of the responses that I can verify are wrong. Sometimes outright nonsense, sometimes believable. Be it general knowledge or something where deeper expertise is required.
jasonjayr · 24 days ago
I worry that the way the models "speak" to users will cause them to drop their "filters" about what to trust and not trust.

We have barely gotten to modern media literacy, and now we have machines that talk like "trusted" face-to-face humans and can be "tuned" to suggest specific products or take any tone the owner/operator of the system wants.

dsjoerg · 24 days ago
> I have friends who are highly educated professionals (PhDs, MDs) who just assume that AI\LLMs make no mistakes.

Highly educated professionals in my experience are often very bad at applied epistemology -- they have no idea what they do and don't know.

physicsguy · 24 days ago
It's super obvious if you try to use something like agent mode for coding: it starts off well but drifts more and more. I've even had it try to do totally irrelevant things, like indenting some code, with various Claude models.
poszlem · 24 days ago
My favourite example is something that happens quite often even with Opus, where I ask it to change a piece of code, and it does. Then I ask it to write a test for that code, it dutifully writes one. Next, I tell it to run the test, and of course, the test fails. I ask it to fix the test, it tries, but the test fails again. We repeat this dance a couple of times, and then it seemingly forgets the original request entirely. It decides, "Oh, this test is failing because of that new code you added earlier. Let me fix that by removing the new code." Naturally, now the functionality is gone, so it confidently concludes, "Hey, since that feature isn't there anymore, let me remove the test too!"
DougBTX · 24 days ago
Yeah, the chess example is interesting. The best specialised AIs for chess are all clearly better than humans, but our best general AIs are barely able to play legal moves. The ceiling for AI is clearly much higher than current LLMs.
pharrington · 24 days ago
Large Language Models aren't general AIs. It's in the name.
nomel · 24 days ago
> you'd quickly find that it isn't able to track a series of moves for very long (usually 5-10 turns; the longest I've seen it last was 18)

In chess, previous moves are irrelevant, and LLMs aren't good at filtering out irrelevant data [1]. For better performance, you should include only the relevant data in the context window: the current state of the board.

[1] https://news.ycombinator.com/item?id=44724238
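
A rough sketch of what that could look like, using the python-chess package (my choice of tooling, not the parent's) to build a prompt that carries only the position and not the move history; the prompt wording is just an illustration:

    # Prompt with only the current position (FEN + ASCII board), no move history.
    # Requires the python-chess package: pip install chess
    import chess

    board = chess.Board()
    for san in ["e4", "e5", "Nf3", "Nc6", "Bb5"]:  # moves played so far
        board.push_san(san)

    side = "White" if board.turn == chess.WHITE else "Black"
    prompt = (
        f"You are playing chess as {side}. Current position (FEN):\n"
        f"{board.fen()}\n\n"
        f"Board:\n{board}\n\n"
        f"Reply with one legal move for {side}, in SAN."
    )
    print(prompt)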

manmal · 24 days ago
Since agents are good only at greenfield projects, the logical conclusion is that existing codebases have to be prepared such that new features are (opinionated) greenfield projects - let all the wiring dangle out of the wall so the intern just has to plug in the appliance. All the rest has to be done by humans, or the intern will rip open the wall to hang a picture.
PaulHoule · 24 days ago
Hogwash. If you can't figure out how to do something with project Y from npm, try checking it out from GitHub with WebStorm and asking Junie how to do it -- often you get a good answer right away. If not, you can ask questions that help you understand the code base. Don't understand some data structure that's a maze of Map<String, Object>(s)? It will scan how it is used and give you draft documentation.

Sure, you can't point it at a Jira ticket and get a PR, but you certainly can use it as a pair programmer. I wouldn't say it is much faster than working alone, but I end up writing more tests, and arguing with it over error handling means I do a better job in the end.

falcor84 · 24 days ago
> Sure you can't point it to a Jira ticket and get a PR

You absolutely can. This is exactly what SWE-Bench[0] measures, and I've been amazed at how quickly AIs have been climbing those ladders. I personally have been using Warp [1] a lot recently and in quite a lot of low-medium difficulty cases it can one-shot a decent PR. For most of my work I still find that I need to pair with it to get sufficiently good results (and that's why I still prefer it to something cloud-based like Codex [2], but otherwise it's quite good too), and I expect the situation to flip over the coming couple of years.

[0] https://www.swebench.com/

[1] https://www.warp.dev/

[2] https://openai.com/index/introducing-codex/

manmal · 24 days ago
What you describe is not using agents at all, which my comment was aimed at if you read the first sentence again.
yoz-y · 24 days ago
They’re not. They’re good at many things and bad at many things. The more I use them the more I’m confused about which is which.
manmal · 24 days ago
They are called slot machines for a reason.
spion · 24 days ago
I think agents have a curve where they're kinda bad at bootstrapping a project, very good if used in a small-to-medium-sized existing project and then it goes downhill from there as size increases, slowly.

Something about a brand-new project often makes LLMs drop to "example grade" code, the kind you'd never put in production. (An example: Claude implemented per-task file logging in my prototype project by pushing to an array of log lines, serializing the entire thing to JSON, and rewriting the entire file, for every logged event.)
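
Roughly the shape of that pattern, reconstructed in Python for illustration (not the actual code), next to the boring append-only alternative:

    # The "example grade" pattern: rewrite the whole log file on every event.
    import json

    class RewriteLogger:
        def __init__(self, path):
            self.path, self.lines = path, []

        def log(self, event):
            self.lines.append(event)
            with open(self.path, "w") as f:   # O(n) rewrite per event
                json.dump(self.lines, f)

    # What you'd normally want: append one JSON line per event.
    class AppendLogger:
        def __init__(self, path):
            self.path = path

        def log(self, event):
            with open(self.path, "a") as f:   # O(1) append per event
                f.write(json.dumps(event) + "\n")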

amelius · 24 days ago
AI is an interpolator, not an extrapolator.
canadaduane · 24 days ago
Very concise, thank you for sharing this insight.
throe23486 · 24 days ago
I read this as interloper. What's an extraloper?
shagie · 24 days ago
An interloper being someone who intrudes or meddles in a situation (inter, "between or amid", + loper, "to leap or run": https://en.wiktionary.org/wiki/loper), an extraloper would be someone who dances or leaps around the outside of a subject or meeting, with similar annoyances.
exasperaited · 24 days ago
Opposite of "inter-" is "intra-".

Intraloper, weirdly enough, is a word in use.

falcor84 · 24 days ago
I agree with most of TFA but not this:

> This means cheaters will plateau at whatever level the AI can provide

From my experience, the skill of using AI effectively is one of treating the AI with a "growth mindset" rather than a "fixed" one. What I do is roleplay as the AI's manager, giving it a task, and as long as I know enough to tell whether its output is "good enough", I can lend it some of my metacognition via prompting to get it to keep working through obstacles until I'm happy with the result.

There are diminishing returns of course, but I found that I can get significantly better quality output than what it gave me initially without having to learn the "how" of the skill myself (i.e. I'm still "cheating"), focusing my learning only on the boundary of what is hard about the task. By doing this, I feel that over time I become a better manager in that domain, without having to spend the effort to become a practitioner myself.

righthand · 24 days ago
How do you know it’s significantly better quality if you don’t know any of the “how”? The quality increase seems relative to the garbage you start with. I guess as long as you impress yourself with the result it doesn’t matter if it’s not actually higher quality.
razzmatazmania · 24 days ago
I don't think "quality" has anything like a universal definition, and when people say that they probably mean an alignment with personal taste.

Does it solve the problem? As long as it isn't prohibitively costly in terms of time or resources, then the rest is really just taste. As a user I have no interest whatsoever if your code is "idiomatic" or "modular" or "functional". In other industries "quality" usually means free of defects, but software is unique in that we just expect products to be defective. Your surgeon operates on the wrong knee? The board could revoke the license, and you are getting a windfall settlement. A bridge design fails? Someone is getting sued or even prosecuted. SharePoint gets breached? Well, that's just one of those things, I guess. I'm not really bothered that AI is peeing in the pool that has been a sewer as long as I can remember. At least the AI doesn't bill at an attorney's rate to write a mess that barely works.

tailspin2019 · 24 days ago
I wouldn’t classify what you’re doing as “cheating”!
andrenotgiant · 24 days ago
This tracks for other areas of AI I am more familiar with.

Below average people can use AI to get average results.

pcrh · 24 days ago
This is in line with another quip about AI: You need to know more than the LLM in order to gain any benefit from it.
hirvi74 · 24 days ago
I am not certain that is entirely true.

I suppose it's all a matter of what one is using an LLM for, no?

GPT is great at citing sources for most of my requests -- even if not always prompted to do so. So, in a way, I kind of use LLMs as a search engine/Wikipedia hybrid (used to follow links on Wiki a lot too). I ask it what I want, ask for sources if none are provided, and just follow the sources to verify information. I just prefer the natural language interface over search engines. Plus, results are not cluttered with SEO ads and clickbait rubbish.

dvsfish · 24 days ago
Hmm, I don't feel like this should be taken as a tenet of AI. I feel a more relevant kernel of truth would be less black and white.

Also, I think what you're saying directly contradicts the parent. Below-average people can now get average results; in other words, the LLM will boost your capabilities (at least if you're already "less" capable than average). This is a huge benefit if you are in that camp.

But for other cases too, all you need to know is where your knowledge ends, and that you can't just blindly accept what the AI responds with. In fact, I find LLMs are often most useful precisely when you don't know the answer, when you're trying to fill in conceptual gaps and explore an idea.

Even, say, during code generation, where you might not fully grasp what's produced, you can treat the model like a pair programmer, ask it follow-up questions, and dig into what each part does. They're very good at converting a "nebulous concept description" into a "legitimate standard keyword" so that you can go and find out about the concept you're unfamiliar with.

Realistically, the only time I feel I know more than the LLM is when I am working on something that I am explicitly an expert in, in which case I often find that LLMs provide nuance-lacking suggestions that don't add much. It takes a lot more filling in of context in these situations for it to be beneficial (but it still can be).

Take a random example of a nifty bit of engineering: the powerline Ethernet adapter. A curious person might encounter these and wonder how they work. I don't believe an understanding of this technology is very obvious to a layman. Start asking questions and you very quickly come to understand how it embeds bits in the very same wiring that transmits power through your house without any interference between the two "types" of signal. It adds data at high frequencies on one end and filters out the regular power-transmitting frequencies at the other end so that the signal can be converted back into bits for use in the Ethernet cable (for a super brief summary). But if I want to really drill into each and every engineering concept, all I need to do is continue the conversation.

I personally find this loop to be unlike anything I've experienced as far as getting immediate access to an understanding of, and supplementary material for, the exact thing I'm wondering about.

jononor · 24 days ago
Above-average people can also use it to get average results, which can actually be useful. For many tasks and use cases, the "good enough" threshold can actually be quite low.
itsoktocry · 24 days ago
That explains why people here are against it, because everyone is above average I guess.
falcor84 · 24 days ago
I'm not against it. I wonder where in the distribution it puts me.
djeastm · 24 days ago
>Below average people can use AI to get average results.

But that would shift the average up.

fellowniusmonk · 24 days ago
The greatest use of LLMs is the ability to get accurate answers to queries in a normalized format without having to wade through UI distractions like ads and social media.

It's the opposite of finding an answer on reddit, insta, tvtropes.

I can't wait for the first distraction-free OS that is a thinking and imagination helper and not a consumption device where I have to block URLs on my router so my kids don't get sucked into a Skinner box.

I love being able to get answers from documentation and to work questions without having to wade through some arbitrary UI BS a designer has implemented in an ad hoc fashion.

leptons · 24 days ago
I don't find the "AI" answers all that accurate, and in some cases they border on a liability, even if way down below all the "AI" slop it says "AI responses may include mistakes".

>It's the opposite of finding an answer on reddit, insta, tvtropes.

Yeah, it really is, because on Reddit or other forums I can tell when someone doesn't know the topic well, but usually someone does and the answer is there. Unfortunately, the "AI" was trained on all of this, and it is just as likely to spit out the wrong answer as the correct one. That is not an improvement on anything.

> wade through UI distraction like ads and social media

Oh, so you think "AI" is going to be free and clear forever? Enjoy it while it lasts, because these "AI" companies are in way over their heads, they are bleeding money like their aorta is a fire hose, and there will be plenty of ads and social whatever coming to brighten your day soon enough. The free ride won't go on forever - think of it as a "loss leader" to get you hooked.

margalabargala · 24 days ago
I agree with the whole first half, but I disagree that LLM usage is doomed to ad-filled shittiness. AI companies may be hemorrhaging money, but that's because their product costs so much to run; it's not like they don't have revenue. The thing that will bring profitability isn't ads; it will be innovations that let current-gen-quality LLMs run at a fraction of the electricity and power cost.

Will some LLMs have ads? Sure, especially at a free tier. But I bet the option to pay $20/month for ad-free LLM usage will always be there.

LtWorf · 24 days ago
"accurate"