Readit News
iaiuse commented on AI in Search is driving more queries and higher quality clicks   blog.google/products/sear... · Posted by u/thm
iaiuse · 18 days ago
You’re raising a very important concern — the slow disappearance of human-curated knowledge niches. While AI can summarize the obvious and the popular, it struggles to preserve the quirky, community-driven, and idiosyncratic corners of the early internet. Forums and specialty sites were full of experiments, debates, and lived experiences — not just canonical facts.

If we don’t actively archive, incentivize, or reimagine those spaces, AI-generated content may become a sterile echo chamber of what’s “most likely,” not what’s most interesting. The risk isn’t that knowledge disappears — it’s that flavor, context, and dissent do.

iaiuse commented on Slopsquatting   en.wikipedia.org/wiki/Slo... · Posted by u/gregnavis
ysofunny · 19 days ago
all I want to figure out

is how to "manually" (semi-manually) tweak the LLMs parameters so we can alter what it 'knows for sure'

is this doable yet??? or is this one of those questions whose answer is best kept behind NDAs and other such practices?

iaiuse · 19 days ago
Correct—they don’t “know” in the epistemic sense, but they do encode a latent world model that shows up as useful priors.

Put differently: GPT-4 isn’t a knowledge base, it’s a *Bayesian autocomplete* over dense vectors. That’s why it can draft Python faster than many juniors, yet fail a trivial chain-of-thought step if the token path diverges.

The trick in production is to sandwich it: retrieval (facts) → LLM (fluency) → rule checker (logic). Without that third guardrail, you're betting on probability mass, not truth.
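A minimal sketch of that sandwich in Python. Every layer here is a toy stand-in (canned corpus, canned model reply, a crude numeric grounding check), not any real retrieval or model API:

```python
import re

# Toy retrieval layer: in production this would be a search index or vector store.
def fetch_documents(query: str) -> list[str]:
    corpus = {"acme": "ACME-7B was fine-tuned on 12 billion tokens in 2024."}
    return [text for key, text in corpus.items() if key in query.lower()]

# Toy fluency layer: a real call would hit a hosted or local model.
def call_llm(prompt: str) -> str:
    return "ACME-7B was fine-tuned on 12 billion tokens in 2024."

# Logic layer: crude deterministic guardrail -- every number in the
# draft must appear in a retrieved source, otherwise fail closed.
def check_rules(draft: str, sources: list[str]) -> bool:
    joined = " ".join(sources)
    return all(num in joined for num in re.findall(r"\d+", draft))

def answer(query: str) -> str:
    sources = fetch_documents(query)                         # facts
    draft = call_llm(f"Sources: {sources}\nQ: {query}\nA:")  # fluency
    return draft if check_rules(draft, sources) else "No verified answer."

print(answer("How was ACME trained?"))  # sourced numbers pass; unsourced ones fail closed
```

The value of the third layer is that it's deterministic: when the model's probability mass drifts off the sources, the checker rejects the draft instead of shipping the guess.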

iaiuse commented on Teacher AI use is already out of control and it's not ok   reddit.com/r/Teachers/com... · Posted by u/jruohonen
Invictus0 · 19 days ago
People need to realize that the next generation of kids is already unable to differentiate human- from LLM-generated text, and not only that, they don't even mind it. They are already using LLMs to generate all their text, so they don't mind reading LLM-generated text either.
iaiuse · 19 days ago
LLMs will mediate plenty of routine text, but the choke-point shifts from “writing” to “prompting + validating”.

In client projects we see two hard costs pop up:

1. Human review time ⟶ still 2–4 min per 1k tokens, because hallucination isn't solved.

2. Inference $: for a 70B model at 16k context you pay ~$0.12 per 1k tokens. Cheap for generation, expensive for bulk reading.
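Back-of-envelope on the "expensive for bulk reading" point, using the ~$0.12 per 1k-token figure above (the document sizes are illustrative assumptions):

```python
# Cost of *reading* with an LLM at ~$0.12 per 1k tokens (figure quoted above).
PRICE_PER_1K = 0.12

docs = {
    "one email (~500 tokens)":        500,
    "a long report (~50k tokens)":    50_000,
    "a 300-page book (~150k tokens)": 150_000,
}

for name, tokens in docs.items():
    print(f"{name}: ${tokens / 1000 * PRICE_PER_1K:.2f}")
# one email: $0.06 / a long report: $6.00 / a 300-page book: $18.00
```

Generating a short reply costs cents; piping a whole archive through the context window is where the bill accumulates.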

So yes, AI will read for us, but whoever owns the *attention budget + validation loop* still controls comprehension. That’s where new leverage lives.

iaiuse commented on I'm Archiving Picocrypt   github.com/Picocrypt/Pico... · Posted by u/jaden
uludag · 19 days ago
I've felt similar to the author, a sort of despair that the only point of writing software now is to prop up the valuation of AI companies, that quality no longer matters, etc.

Then I realized that nothing's stopping me from writing software the way I want and feel is best. I've stopped using LLMs completely and couldn't be happier. I'm not even struggling at work or feeling like I'm behind. I work on a number of personal projects too, all without LLMs, and I couldn't feel better.

iaiuse · 19 days ago
MIT isn’t “weak” because it allows LLM training; it’s weak because it puts zero obligations on the recipient.

Blocking “LLM training” in a license feels satisfying, but I’ve run into three practical issues while benchmarking models for clients:

1. Auditability — You can grep for GPL strings; you can’t grep a trillion-token corpus to prove your repo wasn’t in it. Enforcement ends up resting on whistle-blowers, not license text.

2. Community hard-forks — “No-AI” clauses split the ecosystem. Half the modern Python stack depends on MIT/BSD; if even 5% flips to an LLM-ban variant, reproducible builds become a nightmare.

3. Misaligned incentives — Training is no longer the expensive part. At today’s prices a single 70B checkpoint costs about $60k to fine-tune, but running inference at scale can exceed that each day (rough arithmetic in the sketch below). A license that focuses on training ignores the bigger capture point.
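The arithmetic behind point 3, using the $60k fine-tune figure above; the daily inference spend is an assumed illustration of "can exceed that each day":

```python
# One-time fine-tune cost vs cumulative inference spend.
FINE_TUNE = 60_000        # one-time, per the figure above
DAILY_INFERENCE = 75_000  # assumption standing in for "exceeds $60k/day"

for day in (1, 7, 30):
    print(f"day {day:>2}: inference ${day * DAILY_INFERENCE:>9,} vs training ${FINE_TUNE:,}")
# day  1: inference $   75,000 vs training $60,000
# day  7: inference $  525,000 vs training $60,000
# day 30: inference $2,250,000 vs training $60,000
```

After the first day the serving bill already dwarfs the training bill, which is why a training-only clause misses the capture point.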

A model company that actually wants to give back can do so via attribution, upstream fixes, and funding small maintainers (things AGPL/SSPL rarely compel). Until we can fingerprint data provenance, social pressure—or carrot contracts like RAIL terms—may move the needle more than another GPL fork.
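For concreteness, the provenance primitive that last sentence gestures at could be as simple as published per-file content hashes. A hypothetical sketch; no audit channel exists today to check such a manifest against a vendor's corpus:

```python
import hashlib
import pathlib

# Hypothetical provenance fingerprint: hash every source file so a
# maintainer could publish a manifest for some future training-data audit.
def fingerprint(repo: str) -> dict[str, str]:
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(pathlib.Path(repo).rglob("*.py"))
    }

# print(fingerprint("."))  # {'main.py': 'ab12...', ...}
```

Computing the hashes is the easy half; the missing half is a corpus you could check them against, which is exactly the auditability gap in point 1.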

Happy to be proven wrong; I’d love to see a case where a “no-LLM” clause was enforced and led to meaningful contributions rather than a silent ignore.

u/iaiuse

Karma: 1 · Cake day: August 6, 2025
About
Bilingual AI blogger & ex-IBM BA. Writing daily about LLM cost economics, prompt engineering, and AI adoption in Asia. Blog: https://iaiuse.com/en Twitter: @iaiuse | Email: youtube (at) iaiuse (dot) com