qarl commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
qarl · 3 days ago
Yes, but they only count as 3/5 of a person.
qarl commented on Thousands of U.S. farmers have Parkinson's. They blame a deadly pesticide   mlive.com/news/2025/12/th... · Posted by u/bikenaga
Permit · 4 days ago
> You should consider dropping that instinct.

This is the reason we have people mistakenly repeating the conclusion that AI consumes huge amounts of water, comparable to the consumption of entire cities.

If you make any assumption other than "I don't know what's happening here and need to learn more," you'll constantly be making these kinds of errors. You don't have to have an opinion on every topic.

Edit: By the way, I also don't think we should trust big companies indiscriminately. Like, we could have a system for pesticide approval that errs on the side of caution: we only permit pesticides for which there is undisputed evidence that the chemicals do not cause problems for humans, animals, plants, etc.

qarl · 3 days ago
Not at all. NOT AT ALL.

There are shades of gray here. But you are absolutely not required to extend the benefit of the doubt to entities that have not earned it. That's a recipe for disaster.

Personally, I find myself incredibly biased in favor of people over corporations. I've met a lot of people in my life; they seem mostly nice, if a bit stupid. Well intentioned. Selfish.

Are corporations mostly well intentioned? Well, consider that some people have tried to put "good intentions" into corporate bylaws, and the effort has been viciously resisted.

Corporations will happily take everything you have if you accidentally give it to them. Actual human beings aren't like that.

qarl commented on Language models are injective and hence invertible   arxiv.org/abs/2510.15511... · Posted by u/mazsa
discreteevent · 2 months ago
If we use gzip to compress a calculus textbook, does that mean that gzip understands calculus?
qarl · 2 months ago
To a small degree, yes. GZIP knows that some patterns are more common in text than others - that understanding allows it to compress the data.

But that's a poor example of what I'm trying to convey. Instead, consider plotting the courses of celestial bodies. If you don't understand them, you must record every individual position. But if you do, say, understand gravity, a whole new level of compression is possible.
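A minimal Python sketch of the gzip point, with purely illustrative sample text: gzip exploits repeated patterns, so scrambling the same bytes (destroying structure while keeping symbol frequencies) makes compression markedly worse.

    # Illustrative only: compare gzip on structured vs. scrambled text.
    import gzip
    import random

    text = ("The planets trace ellipses around the sun. " * 200).encode()
    scrambled = bytes(random.sample(text, len(text)))  # same bytes, structure destroyed

    print(len(gzip.compress(text)) / len(text))            # small ratio: patterns exploited
    print(len(gzip.compress(scrambled)) / len(scrambled))  # larger ratio: patterns gone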

qarl commented on Language models are injective and hence invertible   arxiv.org/abs/2510.15511... · Posted by u/mazsa
spuz · 2 months ago
I remember hearing an argument once that said LLMs must be capable of learning abstract ideas because the size of their weights (typically GBs) is so much smaller than the size of their training data (typically TBs or PBs). So either the models are throwing away most of the training data, compressing it beyond the known limits, or abstracting it into more efficient forms. That's why an LLM (I tested this on Grok) can give you a summary of chapter 18 of Mary Shelley's Frankenstein but cannot reproduce a paragraph from the same text verbatim.

I am sure I am not understanding this paper correctly, because it sounds like they are claiming that the model weights can be used to reproduce the original input text, which would represent an extraordinary level of text compression.

qarl · 2 months ago
> they are compressing the data beyond the known limits, or they are abstracting the data into more efficient forms.

I would argue that these are two ways of saying the same thing.

Compression is literally equivalent to understanding.
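A back-of-the-envelope version of the ratio argument above; the sizes are assumed for illustration, not real model figures. If the weights are orders of magnitude smaller than the corpus, verbatim storage is impossible, so whatever the model retains must be a compressed or abstracted form.

    # All sizes below are illustrative assumptions, not real model figures.
    weights_bytes = 10e9     # assume ~10 GB of weights
    training_bytes = 10e12   # assume ~10 TB of training text

    bits_per_byte = weights_bytes * 8 / training_bytes
    print(f"{bits_per_byte:.3f} bits of weight per byte of training data")
    # ~0.008 bits/byte, far below Shannon's ~1 bit/character estimate for
    # English, so the corpus cannot be stored verbatim.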

qarl commented on Karpathy on DeepSeek-OCR paper: Are pixels better inputs to LLMs than text?   twitter.com/karpathy/stat... · Posted by u/JnBrymn
qarl · 2 months ago
Hm.

When I think to myself, I hear words stream across my inner mind.

It's not pages of text. It's words.

qarl commented on The case against social media is stronger than you think   arachnemag.substack.com/p... · Posted by u/ingve
rkomorn · 3 months ago
They're unfortunately not much more capable of responsibly connecting with people non-anonymously, I'd say.

See examples like finding someone's employer on LinkedIn to "out" the employee's objectionable behavior, doxxing, or, at the extreme, SWATing.

qarl · 3 months ago
Yeah. People use their real identities on Facebook, and it doesn't help a bit.
qarl commented on ChatGPT Developer Mode: Full MCP client access   platform.openai.com/docs/... · Posted by u/meetpateltech
ch4s3 · 3 months ago
I'm not belittling it; in fact, I pointed to places where they work well. I just don't see how, in this case, it adds much over the other products I mentioned, which in some cases offer similar layering with a different UX. It still doesn't really do anything to help with style cohesion across assets or the nondeterminism issues.
qarl · 3 months ago
Hm. It seemed like you were belittling it. Still seems that way.
qarl commented on ChatGPT Developer Mode: Full MCP client access   platform.openai.com/docs/... · Posted by u/meetpateltech
ch4s3 · 3 months ago
I'm looking at their blog[1], and yeah, it looks like they're doing literally the exact same thing the other tools I named are doing, but with a UI inspired by things like shader-pipeline tools in game engines. It isn't clear how it's doing all of the things the grandparent is claiming.

[1] https://blog.comfy.org/p/nano-banana-via-comfyui-api-nodes

qarl · 3 months ago
There's no need to belittle dataflow graphs. They are quite a nice model in many settings. I daresay they might be the PERFECT model for networks of agents. But time will tell.

Think of it this way: spreadsheets had a massive impact on the world even though you can do the same thing with code. Dataflow graph interfaces provide a similar level of usefulness.
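For what it's worth, here is a minimal sketch of the spreadsheet analogy in Python; the node names and the graphlib-based evaluation are illustrative, not any particular tool's API. Each node computes from its inputs, and recalculation runs in dependency order, just like cells in a spreadsheet.

    # A toy dataflow graph: each node is (function, list of input-node names).
    from graphlib import TopologicalSorter

    graph = {
        "a":      (lambda: 2, []),
        "b":      (lambda: 3, []),
        "sum":    (lambda a, b: a + b, ["a", "b"]),
        "double": (lambda s: s * 2, ["sum"]),
    }

    # Evaluate nodes in dependency order, like spreadsheet recalculation.
    deps = {name: set(inputs) for name, (_, inputs) in graph.items()}
    values = {}
    for name in TopologicalSorter(deps).static_order():
        fn, inputs = graph[name]
        values[name] = fn(*(values[i] for i in inputs))

    print(values["double"])  # 10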

u/qarl

Karma: 592 · Cake day: January 29, 2016
About
qarl@qarl.com