dmvaldman commented on In defense of flat earthers (2020)   danboykis.com/posts/flat-... · Posted by u/john-doe
dmvaldman · 4 years ago
i have a different take. i'm very glad flat earthers exist. in general, i would hope the population of people who believe an idea is proportional to the probability of its truth, so even the wildest ideas should have some modicum of support. consider a world without this: i imagine it would necessarily have to be thought-policed. i believe that's how we should frame this discussion.

what i think the issue is: we have a broadcasting machine (social media, news, etc.) that runs on sensationalism, so you are always hearing about fringe ideas with no signal of how large the population supporting them actually is.

dmvaldman commented on StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery   github.com/orpatashnik/St... · Posted by u/giorgiop
dmvaldman · 4 years ago
language will be the next interface to software. to get software to do something, you will simply ask it. this work is an example.

i've been documenting this theme in a twitter thread here https://twitter.com/dmvaldman/status/1358916558857269250

dmvaldman commented on OpenAI API   beta.openai.com/... · Posted by u/gdb
azinman2 · 5 years ago
Zero-shot learning is a way of essentially building classifiers. There's no reasoning, there's no planning, there's no commonsense knowledge (not in the comprehensive, deep way we would require before calling it that), and there's no integration of these skills to solve common goals. You can't take GPT and say, OK, turn that into a robot that can clean my house, take care of my kids, cook dinner, and then be a great dinner guest companion.

If you really probe GPT, you'll see that anything beyond an initial sentence or two starts to show how purely superficial it is in terms of understanding and intelligence; it's basically a really amazing version of Searle's Chinese room argument.

dmvaldman · 5 years ago
I think this is generally a good answer, but keep in mind I said AGI "in text". My forecast is that within 3 years you will be able to give arbitrary text commands and get textual output for the equivalent of "clean my house, take care of my kids, ..."-style problems.

I also would contend that there is reasoning happening and that zero-shot demonstrates this. Specifically, reasoning about the intent of the prompt. The fact that you get this simply by building a general-purpose text model is a surprise to me.

Something I haven't seen yet is a model simulate the mind of the questioner, the way humans do, over time (minutes, days, years).

In 3 years, I'll ping you :) Already made a calendar reminder

dmvaldman commented on OpenAI API   beta.openai.com/... · Posted by u/gdb
azinman2 · 5 years ago
And how do we get from zero shot to AGI? You're making gigantic leaps here.
dmvaldman · 5 years ago
what is the difference between zero-shot learning in text and AGI? not saying there isn't one, but can you state what it is? you can express any intent in text (unlike other media). to solve zero-shot in text is equivalent to the model responding to all intents.

many people have different definitions for AGI though. for me it clicked when i realized that text has this universality property of capturing any intent.

dmvaldman commented on OpenAI API   beta.openai.com/... · Posted by u/gdb
Barrin92 · 5 years ago
There's zero understanding in any of this. This is still just superficial text parsing, essentially. Show me progress on Winograd schemas and I'd be impressed. It hasn't got anything to do with AGI; this is the application of ML to very traditional NLP problems.
dmvaldman · 5 years ago
i think you are assuming that what is happening under the hood is that a human-inputted sentence is being parsed into a grammar. it is not.
dmvaldman commented on OpenAI API   beta.openai.com/... · Posted by u/gdb
azinman2 · 5 years ago
What breakthrough occurred?
dmvaldman · 5 years ago
Zero-shot and few-shot learning in GPT-3, and the lack of significant diminishing returns in scaling text models. Zero-shot learning is equivalent to saying "i'm just going to ask the model to do something it was not trained to do".
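The zero-shot vs. few-shot distinction this thread keeps returning to can be sketched in a few lines. This is a hypothetical illustration, not code from the thread: `zero_shot_prompt` and `few_shot_prompt` are made-up helper names, and no model is actually called — the point is only how the two prompt styles differ.

```python
def zero_shot_prompt(task: str, inp: str) -> str:
    """Ask for the task directly, with no worked examples in the prompt."""
    return f"{task}\n\nInput: {inp}\nOutput:"


def few_shot_prompt(task: str, examples: list[tuple[str, str]], inp: str) -> str:
    """Prepend a handful of (input, output) demonstrations before the query."""
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n\n{demos}\n\nInput: {inp}\nOutput:"


# Zero-shot: the model sees only the instruction and the query.
zs = zero_shot_prompt("Translate English to French.", "cheese")

# Few-shot: the model additionally sees a couple of demonstrations.
fs = few_shot_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"), ("peppermint", "menthe poivrée")],
    "cheese",
)
```

Either string would be sent as the prompt to a text model; "zero-shot" just means the model was never fine-tuned on, nor shown examples of, the task it is being asked to perform.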
dmvaldman commented on OpenAI API   beta.openai.com/... · Posted by u/gdb
dmvaldman · 5 years ago
AGI in text is < 3yrs away.

u/dmvaldman

Karma: 1423 · Cake day: December 6, 2009
About
Currently ruminating. Previously co-founder at Standard Cognition

twitter: @dmvaldman
