This is a link to a tweet announcing a YouTube stream coming in half an hour. Does / did the YouTube video contain the announcement?
Now that Twitter has gone subscriber-only, we can't see any context around tweets. I think tweets should be DOA when posted to HN, as they no longer allow for follow-up or RTFA-ing.
I'm not sure what you mean by twitter going "subscriber only". I pay them nothing per month and I just opened this link in an incognito window with no trouble...
Not that I'm in favor of posting links that require login or are paywalled, but you can currently still work around the Twitter annoyance via s/twitter.com/nitter.net/:
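That s/twitter.com/nitter.net/ substitution is just a hostname swap on the URL. A minimal sketch in Python, using the Carmack tweet link posted elsewhere in this thread:

```python
# Rewrite a tweet URL to its nitter.net mirror by swapping the hostname.
# count=1 ensures only the first occurrence is replaced.
url = "https://twitter.com/ID_AA_Carmack/status/1706420064956661867"
mirror = url.replace("twitter.com", "nitter.net", 1)
print(mirror)  # https://nitter.net/ID_AA_Carmack/status/1706420064956661867
```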
This is massive. Rich is probably the one person who legitimately has a realistic, long-term vision for AGI (and he has been mostly correct his whole career).
Notably, he left the US because he did not want to support the Iraq war, and he has since established Alberta as THE reinforcement learning hub. He also just expanded his lab, so I'm curious how this will be integrated.
Carmack + Sutton working on AGI tells me it is 100% going to happen.
Having watched the announcement, the (literally) single question presented was essentially "Why should the man on the street care about AGI now?", to which the answer was "They shouldn't." But key to that answer was Carmack's assertion that Keen will probably never release a "product" like ChatGPT that aims specifically to wow people; that's not Keen's goal.
Clearly Rich Sutton is a giant in AI for his contributions in RL, but his recent brief talk "AI Succession" (https://www.youtube.com/watch?v=NgHFMolXs3U) made me worry a bit about the sort of perspective he has on what the "good" outcome here looks like. I say this as someone who is generally optimistic about the promise of AGI. I have no love for the machines as a "species", and by Rich's definition here, yeah, I am "specist" in favor of humans. I think we should use technology for our own benefit, and that it's not inevitable that machines "replace" us.
I also think his framing of the counterarguments is not charitable. The serious AI-risk arguments do not argue that a super-intelligent AI will necessarily be evil. They only argue that its motivations will be unaligned with ours, that it will be more competent in achieving goals than us, and that this will be bad for humans as a side effect. I think a good comparison is humans building a highway that incidentally crushes an ant colony. They didn't set out on an evil mission to destroy ants because they hate them, it just happened as a side effect of something the humans wanted. No evil required.
I have studied Rich's RL book (both editions), and enjoy his work and occasional talks on YouTube. Carmack is obviously talented. In the referenced YouTube announcement Carmack basically said that the average person should not really care about what they are doing right now, and the way they are funded they can take their time, not rush out any public systems or projects, etc. - at least that is the way I interpreted what he said.
Off topic, but: I used to think AGI was likely, but probably not until maybe 2040. After doing a deep dive into LLMs in the last 2 years, and generally deep learning for the eight years before that, I now think there is some real chance of having something that I consider AGI by 2030. Lately LLMs have become somewhat useful for graph datastores, knowledge graphs (KGs), etc. I think the missing piece is the ability of LLMs to handle reasoning, and seeing GPT-4's ability to write Prolog code given a problem description, it doesn't strain my imagination too much to think a breakthrough for reasoning could be here in a year or less.
There are a fair number of formal reasoning systems out there for various structured reasoning tasks. LLMs may not be able to reason themselves, but they can specify things to be reasoned about in Gallina or OWL DL or whatever and defer. Folks seem to think AGI will be a single model, whereas if you look at our primary example of human intelligence, it's many interconnected models. There's no reason for, say, LLMs to have the ability to play chess, solve differential equations, recall facts, or reason formally; these already have excellent solutions that you might as well use. It's interesting to see how far they can go in these areas, but I'm not holding my breath for an LLM to out-math Mathematica.
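The "specify and defer" idea can be sketched with a toy example: imagine an LLM emits facts and a rule in a structured form, and a tiny forward-chaining engine (a stand-in for a real reasoner like Prolog or an OWL DL reasoner) performs the actual deduction. The facts, the grandparent rule, and the query below are hypothetical illustrations, not from any real system:

```python
# Facts the LLM might extract from text, as (relation, subject, object) triples.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# Rule (Prolog-style): grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
def forward_chain(facts):
    """Apply the grandparent rule to a fixed point and return the closure."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (r1, x, y) in derived:
            for (r2, y2, z) in derived:
                if r1 == "parent" and r2 == "parent" and y == y2:
                    fact = ("grandparent", x, z)
                    if fact not in derived:
                        new.add(fact)
        if new:
            derived |= new
            changed = True
    return derived

closure = forward_chain(facts)
print(("grandparent", "alice", "carol") in closure)  # True
```

The LLM's only job here is producing the triples and the rule; the deterministic engine does the reasoning, which is the division of labor the comment describes.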
I think integrating better reasoning is just the next step. Thank you for considering my 2030 prediction to be “conservative.” That is refreshing for me.
In 1982 I came home from AAAI with a bumper sticker “AI, It’s for Real” that I embarrassingly had on my car for years.
Dude, Carmack didn't contribute to AI, but that doesn't mean he didn't contribute to anything. The guy's brilliant at developing * with computers, can squeeze the utmost out of the least in very clever ways, and more than anything knows how to build and make things that work. Sutton brings the AI chops to the table. The deeply technical nerd who produces more before breakfast than most first-quartile engineers at a FAANG produce in a career, and the guy who essentially created the current AI techniques and mentored all the major contributors today.
Maybe true, but he's trying, investing his time and possibly his money in it. He could also just do nothing, it's cool that he's spending time thinking about it. What more could you ask for?
Yep. Similar to his aerospace venture. Reminds me of that guy who thinks the F-16 is the greatest fighter jet of all time and all air warfare will be WW2 gun battles. Carmack and Sutton are classical thinkers who did great work in the past, but are also stuck in it.
https://nitter.net/ID_AA_Carmack/status/1706420064956661867
Makes me really think that sometimes the great man theory has merit.
Yeah, but if it's anything like the Oculus, it will take Valve or HTC to actually make it wow.
Seems like a fairly compelling combination to me.