Readit News
gurkendoktor commented on DeepMind: A Generalist Agent   deepmind.com/publications... · Posted by u/extr
idiotsecant · 3 years ago
>Then what?

Growing tomatoes is less efficient than buying them, regardless of your metric. If you just want really cleanly grown tomatoes, you can buy those. If you want cheap tomatoes, you can buy those. If you want big tomatoes, you can buy those.

And yet individual people still grow tomatoes. Zillions of them. Why? Because we are inherently over-evolved apes who like sweet juicy fruits. The key to being a successful human in the post-scarcity AI overlord age is to embrace your inner ape and just do what makes you happy, no matter how simple it is.

The real insight out of all this is that the above advice is also valid even if there are no AI overlords.

gurkendoktor · 3 years ago
Humans are great at making up purpose where there is absolutely none, and indeed this is a helpful mechanism for dealing with post-scarcity.

The philosophical problem that I see with the "AI overlord age" (although not directly related to AI) is that we'll then have the technology to change the inherent human desires you speak of, and at that point growing tomatoes just seems like a very inefficient way of satisfying a reward function that we can change to something simpler.

Maybe we wouldn't do it precisely because it'd dissolve the very notion of purpose? But it does feel to me like destroying (beating?) the game we're playing when there is no other game out there.

(Anyway, this is obviously a much better problem to face than weaponized use of a superintelligence!)

gurkendoktor commented on DeepMind: A Generalist Agent   deepmind.com/publications... · Posted by u/extr
londons_explore · 3 years ago
> Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?

If we manage to make a 'better' replacement for ourselves, is it actually a bad thing? Our cousins on the hominoid family tree are all extinct, yet we don't consider that a mistake. AI made by us could well make us extinct. Is that a bad thing?

gurkendoktor · 3 years ago
Your comment summarizes what I worry might be a more widespread opinion than I expected. If you think that human extinction is a fair price to pay for creating a supercomputer, then our value systems are so incompatible that I really don't know what to say.

I guess I wouldn't have been so angry about any of this before I had children, but now I'm very much in favor of prolonged human existence.

gurkendoktor commented on DeepMind: A Generalist Agent   deepmind.com/publications... · Posted by u/extr
fossuser · 3 years ago
Yeah, I'm not arguing that alignment is impossible - just that we don't know how to do it, and it's really important that we figure it out before we figure out AGI (which seems unlikely).

The ant example is just to illustrate the spectrum of intelligence in a way more people may understand (rather than treating "smart person" and "dumb person" as the entirety of the spectrum). In the case of a true self-improving AGI, the delta is probably much larger than that between an ant and a human, but the example at least gets more of the point across (that was my goal, anyway).

The other common mistake is people think intelligence implies human-like thinking or goals, but this is just false. A lot of bad arguments from laypeople tend to be related to this because they just haven't read a lot about the problem.

gurkendoktor · 3 years ago
One avenue of hope for successful AI alignment that I've read about somewhere is that we don't need most laypeople to understand the risks of it going wrong, because for once the most powerful people on this planet have incentives that are aligned with ours. (Not like global warming, where you can buy your way out of the mess.)

I really hope someone with very deep pockets will find a way to steer the ship more towards AI safety. It's frustrating to see someone like Elon Musk, who was publicly worried about this very specific issue a few years ago, waste his time and money on buying Twitter.

Edit: I'm aware that there are funds available for AI alignment research, and I'm seriously thinking of switching into this field, mental health be damned. But it would help a lot more if someone could change Eric Schmidt's mind, for example.

gurkendoktor commented on DeepMind: A Generalist Agent   deepmind.com/publications... · Posted by u/extr
fossuser · 3 years ago
The closer we get, the more alarming the alignment problem becomes.

https://intelligence.org/2017/10/13/fire-alarm/

Even people like Eric Schmidt seem to downplay it (in a recent podcast with Sam Harris), just saying “smart people will turn it off”. If it thinks faster than us and has goals not aligned with ours, this is unlikely to be possible.

If we’re lucky, building it will have some easier-to-limit constraint, as nuclear weapons do, but I’m not that hopeful about this.

If people could build nukes with random parts in their garage, I’m not sure humanity would have made it past that stage. People underestimated the risks of nuclear weapons initially too, and that’s with the risk being fairly obvious. The more nuanced risk of unaligned AGI is a little harder to grasp, even for people in the field.

People seem to model it as a smart person rather than something that thinks orders of magnitude faster than us.

If an ant wanted to change the goals of humanity, would it succeed?

gurkendoktor · 3 years ago
To be fair, ants have not created humanity. I don't think it's inconceivable for a friendly AI to exist that "enjoys" protecting us in the way a friendly god might. And given that we have AI (well, language models...) that can explain jokes before we have AI that can drive cars, AI might be better at understanding our motives than the stereotypical paperclip maximizer.

However, all of this is moot if the team developing the AI does not even try to align it.

gurkendoktor commented on DeepMind: A Generalist Agent   deepmind.com/publications... · Posted by u/extr
gcheong · 3 years ago
I don’t know if we could sufficiently prepare ourselves for such a world. It almost seems as if we’d have to build it first so it could determine the best way to prepare us.
gurkendoktor · 3 years ago
For one thing, we could try to come up with safety measures that prevent the most basic paperclip maximizer disaster from happening.

At this point I almost wish it were still the military making these advances in AI, not private companies. Anyone working on a military project has to have some sense that they're working on something dangerous.

gurkendoktor commented on DeepMind: A Generalist Agent   deepmind.com/publications... · Posted by u/extr
hans1729 · 3 years ago
I’m not sure how to put into words my excitement about the progress we’ve seen in AI research over the last few years. If you haven’t read it, give Tim Urban’s classic piece a slice of your attention: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

It’s a very entertaining read from a couple of years ago (I think I read it in 2017), and man, have things happened in the field since then. It feels like things are truly starting to come together. Transformers plus some incremental progress look like a very, very promising avenue. I deeply wonder in which areas this will shape the future more than we are able to anticipate.

gurkendoktor · 3 years ago
Not you specifically, but I honestly don't understand how positive many in this community (or really anyone at all) can be about this news. Tim Urban's article explicitly touches on the risk of human extinction, not to mention all the smaller-scale risks from weaponized AI. Have we made any progress on preventing this? Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?

Even the best-case scenario that some are describing, of uploading ourselves into some kind of post-singularity supercomputer in the hopes of being conscious there, doesn't seem very far from plain extinction.

gurkendoktor commented on “I don't know the numbers”: a math puzzle   alexanderell.is/posts/num... · Posted by u/otras
icambron · 3 years ago
This problem is a variation of the Sum and Product Puzzle, sometimes called “The Impossible Puzzle”: https://en.m.wikipedia.org/wiki/Sum_and_Product_Puzzle
gurkendoktor · 3 years ago
Ooohh, I didn't know this name for it, thanks. I somehow came across it as The Sultan's Riddle on the internet:

https://explainextended.com/2016/12/31/happy-new-year-8/

I prefer this less sterile framing of it. It was the most fun that I ever had with a puzzle, so to anyone scrolling around on this page, I would recommend not jumping straight to the solution :)
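
For anyone who does want to check their work afterwards, the Wikipedia variant linked above brute-forces nicely: each statement in the dialogue is just a filter over the remaining candidate pairs. Here's a minimal Python sketch, assuming the classic bounds (1 < x < y and x + y <= 100); the article's variation and the Sultan's Riddle use their own constraints, so the candidate set would need adjusting for those.

    from collections import defaultdict
    from itertools import combinations

    # Candidate pairs (x, y) with 1 < x < y and x + y <= 100
    # (the bounds used in the Wikipedia formulation).
    pairs = [(x, y) for x, y in combinations(range(2, 100), 2) if x + y <= 100]

    def group_by(candidates, key):
        """Group candidate pairs by what one player knows (sum or product)."""
        groups = defaultdict(list)
        for x, y in candidates:
            groups[key(x, y)].append((x, y))
        return groups

    # 1. P: "I don't know the numbers" -> the product must be ambiguous.
    prods = group_by(pairs, lambda x, y: x * y)
    step1 = [(x, y) for x, y in pairs if len(prods[x * y]) > 1]

    # 2. S: "I knew you didn't know" -> every way of splitting the sum
    #    gives an ambiguous product.
    sums = group_by(pairs, lambda x, y: x + y)
    step2 = [(x, y) for x, y in step1
             if all(len(prods[a * b]) > 1 for a, b in sums[x + y])]

    # 3. P: "Now I know them" -> the product is unique among survivors.
    prods2 = group_by(step2, lambda x, y: x * y)
    step3 = [(x, y) for x, y in step2 if len(prods2[x * y]) == 1]

    # 4. S: "Now I know them too" -> the sum is unique among survivors.
    sums3 = group_by(step3, lambda x, y: x + y)
    print([(x, y) for x, y in step3 if len(sums3[x + y]) == 1])

Running it prints the single pair that survives all four statements.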

gurkendoktor commented on Google Docs will “warn you away from inappropriate words”   twitter.com/pmarca/status... · Posted by u/memish
shadowgovt · 3 years ago
Nothing is being censored here; it's a simple recommendation. If they started censoring personal correspondence, they would open up a huge opportunity for a competitor to disrupt them. It's not an impossible scenario, but it's an unlikely (and more importantly, self-correcting) one.
gurkendoktor · 3 years ago
Self-correcting because frustrated users will simply start their own Google? Even if that happens, their second generation of employees will start a revolt if their company doesn't follow the latest DIE best practices.

I honestly think that only the Russian/Chinese model of a nationalized IT ecosystem has a chance to resist these trends.

gurkendoktor commented on Ask HN: Why is Substack so popular?    · Posted by u/skilled
michaelt · 3 years ago
According to [1] anti-vaxxers "have flocked to Substack, podcasting platforms and a growing number of right-wing social media networks over the past year after getting kicked off or restricted on Facebook, Twitter and YouTube."

If you're interested in censored material more broadly, you might be interested in the ALA's most challenged books lists [2], which include such classics as "Of Mice and Men" and "Adventures of Huckleberry Finn" along with books as widely loved as "Harry Potter" and "James and the Giant Peach".

[1] https://www.washingtonpost.com/technology/2022/01/27/substac... [2] https://www.ala.org/advocacy/bbooks/frequentlychallengedbook...

gurkendoktor · 3 years ago
Here's what I would consider a boring example of such anti-vaxx heresy:

https://www.eugyppius.com/p/maximum-vaccination

According to this Guardian article, the "Center for Countering Digital Hate" (also mentioned in your article) would prefer that Substack hadn't given it a platform:

https://www.theguardian.com/technology/2022/jan/27/anti-vaxx...

The idea that Yet Another Generic NGO needs to "counter hate" because someone on the internet thinks the vaccine is not going to stop the pandemic is completely bonkers.

What I want to say is, a lot of the controversy around Substack seems to be that it is not aligned with the Right Side in the Culture War. I think they have some great writers, but they're not the next WikiLeaks or anything.


u/gurkendoktor

Karma: 3602 · Cake day: August 11, 2011