1attice · 2 years ago
One of the things that I can't help but notice as I read the comments here on HN is the extent to which we _want_ AI to be, if not a deity, then at least a grand new chapter for humanity; we want this even more than we want the new chapter to be _good_.

The desire has a narratival, almost religious quality -- I'm reminded of _Palo Alto_ https://www.nytimes.com/2023/02/14/books/malcolm-harris-palo... (Malcolm Harris, 2023)

What I wish Alang had explored more was just this phenomenon; it worries me far more than paperclip maximization. Yet he does a good job characterizing the underlying social context, i.e. the Andreessenian view that all problems are soluble in sufficient technology (a position which, of course, is vacuously true -- the _real_ issue is what the cost is. For example, a universe-sized machine of infinite complexity could do absolutely anything; the cost, of course, is absolutely everything.)

A machine that makes art and does taxes is one thing. A machine that _is tendentiously presented by its creators as divine_ is a theohazard.

elzbardico · 2 years ago
I believe there's a very mundane explanation for that. AI is the new gold rush in the startup world. People are funding AI companies and working for AI companies; they need to believe all the hype.
BriggyDwiggs42 · 2 years ago
I think the issue is more a lack of alternatives.
m3047 · 2 years ago
The only other comment here at this point is "Not worth a read.": is reading itself worth it? Is being human worth it?

It's long, it's philosophical. It's not something I'd normally read specifically with the objective of enabling commenting intelligently on HN. Rather it's something I'd read, knowing others had read it, to share our thoughts over beers in a setting quiet enough and free from distractions that we could hear ourselves think; or maybe during a long walk in the woods.

If the title is misleading it is only because it itself is so short and the piece is so long.

sandspar · 2 years ago
The article is shallow and offers nothing new. It reads like a very well written freshman essay. If you're impressed by offhand references to Derrida then you might enjoy it. The author also bizarrely says that AI can "only" do data analysis and hence isn't very useful -- a dismissive characterization like "it's only more efficient at handling spreadsheets," as if that weren't world-shattering on its own.
1attice · 2 years ago
philosophy MA here.

The point about Derrida was actually pretty good -- I've myself reflected that Derrida, Deleuze and a couple of other French poststructuralists got weirdly close to the same basic ideas that power LLMs (high-dimensional spaces, etc.) The Derridean trope that always hits me when I see AI art is the 'virtual image', which, IIRC, is about as close to 'the latent space' as I could hope to find in source materials so far apart.

Which is impressive, and, in a way, helps support the LLM/transformer paradigm -- if we're seeing results that were carefully deduced sixty years ago with a wildly different methodology, it suggests that both Derrida and ML research are on, if not the right track, then at least a shared track; that's in itself a result.

If you assume that any use of the word 'Derrida' is a form of education-signalling, you're probably taking a cognitive shortcut.

add-sub-mul-div · 2 years ago
> is reading itself worth it?

People who are made uncomfortable by the idea that AI might be overhyped (to the extent of making an empty passive aggressive comment like that) might be the same people who don't think reading is worthwhile, hence the desire to be saved from having to process a substantive work themself.

NikkiA · 2 years ago
*shrug* I think AI is overhyped, AND I felt that the article was pretentious wankery not worth reading.
coldtea · 2 years ago
People say it's "not worth reading" or "shallow" but will happily gulp down what are essentially PR articles or marketing material for AI companies...
tim333 · 2 years ago
Goodness, it waffles on a bit, but cutting to the last paragraph: he says that in ten or twenty years AI and the world won't be much different. Which seems pretty dubious to me, being more of the Wait But Why school of thought myself.

(https://waitbutwhy.com/2015/01/artificial-intelligence-revol... If you ^F for "cute" you'll see roughly the take of the author)

The art is cool.

Engineering-MD · 2 years ago
Wait But Why is great, but I always thought it ignores S-curves and the diminishing returns which come for all things
tim333 · 2 years ago
Yeah, a simplification in some ways no doubt.

One thing that's seemed surprisingly smooth is the increase in compute per dollar https://149909199.v2.pressablecdn.com/wp-content/uploads/201...

You'd think it'd follow an S-curve, but for more than a century on it goes, with no immediate sign of stopping.

For most of that time improved computation hasn't been very noticeable -- a 2010 computer does much the same as a 2000 computer. However, the point around now when they overtake us is kind of a big deal: the switch from biological life to tech processing as the smartest thing in our bit of the universe, for the first and only time since the big bang.

Looking back, we had our ancestors reproducing and dying for a trillion generations or so since unicellular life; going forward, our virtual descendants will be immortal. I think the author has it wrong with nothing much changing. (1trn source https://www.quora.com/How-many-generations-lie-between-me-an...)

I've always thought death rather depressing especially with family and friends so it seems a change for the better to me.


_wire_ · 2 years ago
> The classic story here is that of an AI system whose only—seemingly inoffensive—goal is making paper clips. According to Bostrom, the system would realize quickly that humans are a barrier to this task, because they might switch off the machine.

I am at a loss to understand how this agent of doom (the AI not Bostrom) can be both "intelligent" and not understand that there are enough paperclips.

Unless I assume the argument rests on the word intelligent being meaningless.

But go on...

jarpschope · 2 years ago
Would a superintelligence reach the conclusion that humans are a cancer on earth that must be destroyed? That's a better example at the core of the alignment issue. Some values that humans hold in high regard, like the continued existence of billions of humans on earth, may not be there in a non-human-biased superintelligence.
_wire_ · 2 years ago
> Would a superintelligence reach the conclusion that humans are a cancer on earth that must be destroyed?

Does that seem intelligent? According to the imitation game?

rcxdude · 2 years ago
AI safety researchers generally define intelligence in terms of ability to reach goals (which is certainly not a good general definition, but a useful one in this context). Intelligence is independent of what that goal is, and intelligent entities, including humans, don't generally choose their root goals, only intermediate ones. A super-intelligent paperclip maximiser would likely realise that humans don't actually want this many paperclips, but do it anyway, if that's the goal it's been set (it's important not to anthropomorphize such an entity too much: humans tend to have a complex set of goals, with some balancing between them, that will generally avoid such a single-minded approach. But an intelligent machine needn't have that).

(LLMs, at least, seem not to suffer much from this. In fact they're pretty hard to direct in general, and mimic a lot of the human elements in their dataset. So at the moment I don't think they're the kind of thing that will result in such an entity. But I also don't think they're likely to result in a super-intelligent machine: super-knowledgeable, maybe, but I don't expect superhuman ability to synthesise new insights from that knowledge)

ThrowawayR2 · 2 years ago
> "I am at a loss to understand how this agent of doom (the AI not Bostrom) can be both "intelligent" and not understand that there are enough paperclips."

Having offspring might be ill-advised for a variety of reasons (medical, financial, etc.) but that doesn't stop humans from being horny and producing offspring. If powerful drives can be encoded into a sapient organism well below the level of conscious thought, perhaps similar drives can be present in an artificial sapient being.

_wire_ · 2 years ago
> If powerful drives can be encoded into a sapient organism

The article says nothing about this. Is anyone demonstrating sentient synthetic life? This idea has nothing to do with the technology at hand.

But my point is definitional regarding the word "intelligence".

I'd accept a counter argument that humanity is already wrecking global ecosystems by making too many "paperclips," and given that the only measure we have for AI is the imitation game, then doom QED.

But this amounts to a diagnosis of physician heal thyself. "Dr. it hurts when I do this!"

iamleppert · 2 years ago
Need an AI to summarize the article for me. Too long. Please stop posting such large articles to my HN page. Thanks.
amai · 2 years ago
To distinguish a false from a real god, one first has to know the features of a real god. The article never really explains the latter, so I think it cannot conclude that AI is a false god.