Readit News
tablatom commented on AI World Clocks   clocks.brianmoore.com/... · Posted by u/waxpancake
ghurtado · a month ago
In lucid dreams there's a whole category of things like this: reading a paragraph of text, looking at a clock (digital or analog), or working any kind of technology more complex than a calculator.

For me personally, even light switches have been a huge tell in the past, so basically almost anything electrical.

I've always held the utterly unscientific position that this is because the brain only has enough GPU cycles to show you an approximation of what the dream world looks like, but to actually run a whole simulation behind the scenes would require more FLOPs than it has available. After all, the brain also needs to run the "player" threads: It's already super busy.

Stretching the analogy past the point of absurdity, this is a bit like modern video game optimizations: the mountains in the distance are just a painting on a surface, and the remote on that couch is just a messy blur of pixels when you look at it up close.

So the dreaming brain is like a very clever video game developer, I guess.

tablatom · a month ago
Wait, lucid dreamers need tells to know where they are?!?
tablatom commented on Andrej Karpathy – It will take a decade to work through the issues with agents   dwarkesh.com/p/andrej-kar... · Posted by u/ctoth
hackerdood · 2 months ago
You’ve probably already listened to it but in the event you haven’t: https://podcasts.apple.com/us/podcast/freakonomics-radio/id3...

He seems to share your sentiment.

tablatom · 2 months ago
The link is broken - could you repost please?
tablatom commented on Andrej Karpathy – It will take a decade to work through the issues with agents   dwarkesh.com/p/andrej-kar... · Posted by u/ctoth
Bengalilol · 2 months ago
I am thinking the same.

And we should start considering what makes us human and how we can valorize our common ground.

tablatom · 2 months ago
This. I believe it’s the most important question in the world right now. I’ve been thinking long and hard about it from an entirely practical perspective, and the answer has surprised me: it seems to be our capacity to love. The idea is easily dismissed as romantic, but when I say I’m being practical I really mean it. I’m writing about it here: https://giftcommunity.substack.com/
tablatom commented on Take the pedals off the bike   fortressofdoors.com/take-... · Posted by u/bemmu
tablatom · a year ago
There's a better way but it requires a very large space like a big empty parking lot.

...and that's it! Turns out the hard part is not riding a bike but riding a bike in a straight line. Once you've got the hang of riding wherever the bike seems to want to go, you can gradually learn to get it under control. Surprisingly easy!

tablatom commented on Alignment faking in large language models   anthropic.com/research/al... · Posted by u/adultorata
md224 · a year ago
But what if it's only faking the alignment faking? What about meta-deception?

This is a serious question. If it's possible for an A.I. to be "dishonest", then how do you know when it's being honest? There's a deep epistemological problem here.

tablatom · a year ago
Came to the comments looking for this. The term “alignment faking” implies that the AI has a “real” position. What would that even mean? I feel similarly about the term “hallucination”: hallucinating is all it ever does!

I think Alan Kay said it best: what we’ve done with these things is hack our own language processing. Their behaviour has enough in common with something they are not that we can’t tell the difference.

tablatom commented on Sequence to sequence learning with neural networks: what a decade   youtube.com/watch?v=YD-9N... · Posted by u/dspoka
tablatom · a year ago
Any recommendations for thinkers writing good analysis on the implications of superintelligence for society? Especially interested in positive takes that are well thought through. Are there any?

Ideally voices that don’t have a vested interest.

For example, give a superintelligence some money, tell it to start a company. Surely it’s going to quickly understand it needs to manipulate people to get them to do the things it wants, in the same way a kindergarten teacher has to “manipulate” the kids sometimes. Personally I can’t see how we’re not going to find ourselves in a power struggle with these things.

Does that make me an AI doomer party pooper? So far I haven’t found a coherent optimistic analysis, just lots of very superficial “it will solve hard problems for us! Cure disease!”

It certainly could be that I haven’t looked hard enough. That’s why I’m asking.

tablatom commented on Company claims 1k% price hike drove it from VMware to open source rival   arstechnica.com/informati... · Posted by u/elorant
tablatom · a year ago
Off topic, but if I may: the way people use percentages to express multiples is confusing. A doubling is a 100% increase, so a 200% increase is 3x, and so on. Then at some point we forget about the +1, and a 1000% hike becomes "10 times the sum it previously paid for software licenses" — when a 1000% increase is actually 11x. Just a pet peeve, I guess : )
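The off-by-one is easy to pin down with a quick sketch (the helper name here is made up for illustration):

```python
def increase_to_multiplier(pct_increase: float) -> float:
    """A pct% *increase* means the new value is (1 + pct/100) times the old."""
    return 1 + pct_increase / 100

# A 100% increase doubles the value...
assert increase_to_multiplier(100) == 2
# ...a 200% increase triples it...
assert increase_to_multiplier(200) == 3
# ...and a 1000% increase is 11x, not 10x.
assert increase_to_multiplier(1000) == 11
```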
tablatom commented on U of T computational imaging researchers harness AI to fly with light in motion   web.cs.toronto.edu/news-e... · Posted by u/croes
tablatom · a year ago
I thought I could only see photons that hit my retina :)
tablatom commented on JSON Patch   zuplo.com/blog/2024/10/10... · Posted by u/DataOverload
hyperhello · a year ago
What’s nice about JSON is that it’s actually valid JavaScript, with some formal specification to avoid any nasty circular references or injections.

Why can’t your protocol just be valid JavaScript too? this.name = "string"; instead of mixing so many metaphors?

tablatom · a year ago
> Why can’t your protocol just be valid JavaScript too?

It is.
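To spell it out: a JSON Patch document (RFC 6902) is itself plain JSON, and JSON is a syntactic subset of JavaScript, so the protocol is already valid JavaScript. A minimal sketch in Python, applying a single "replace" operation by hand (the apply_replace helper is made up for illustration; real code would use a library such as jsonpatch):

```python
import json

def apply_replace(doc: dict, op: dict) -> dict:
    """Apply a single RFC 6902 'replace' operation to a nested dict."""
    assert op["op"] == "replace"
    # A JSON Pointer like "/a/b" names nested keys; split it into path segments.
    *parents, last = op["path"].lstrip("/").split("/")
    target = doc
    for key in parents:
        target = target[key]
    target[last] = op["value"]
    return doc

# The patch itself is ordinary JSON -- and therefore also valid JavaScript.
patch = json.loads('[{"op": "replace", "path": "/name", "value": "string"}]')
doc = {"name": "old"}
apply_replace(doc, patch[0])
# doc is now {"name": "string"}
```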

tablatom commented on Too much efficiency makes everything worse (2022)   sohl-dickstein.github.io/... · Posted by u/feyman_r
refibrillator · a year ago
I recognize the author Jascha as an incredibly brilliant ML researcher, formerly at Google Brain and now at Anthropic.

Among his notable accomplishments, he and his coauthors mathematically characterized the propagation of signals through deep neural networks using techniques from physics and statistics (mean field theory and free probability theory), leading to arguably some of the most profound yet under-appreciated theoretical and experimental results in ML of the past decade. For example, see “dynamical isometry” [1] and the evolution of those ideas, which were instrumental in achieving convergence in very deep transformer models [2].

After reading this post and the examples given, in my eyes there is no question that this guy has an extraordinary intuition for optimization, spanning beyond the boundaries of ML and across the fabric of modern society.

We ought to recognize his technical background and raise this discussion above quibbles about semantics and definitions.

Let’s address the heart of his message, the very human and empathetic call to action that stands in the shadow of rapid technological progress:

> If you are a scientist looking for research ideas which are pro-social, and have the potential to create a whole new field, you should consider building formal (mathematical) bridges between results on overfitting in machine learning, and problems in economics, political science, management science, operations research, and elsewhere.

[1] Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks

http://proceedings.mlr.press/v80/xiao18a/xiao18a.pdf

[2] ReZero is All You Need: Fast Convergence at Large Depth

https://arxiv.org/pdf/2003.04887

tablatom · a year ago
Interesting timing for me! Just a couple of days ago I discovered the work of the biologist Olivier Hamant, who has been raising exactly this issue. His main thesis is that very high performance (which he defines as efficacy towards a known goal plus efficiency) and very high robustness (the ability to withstand large fluctuations in the system) are physically incompatible. Examples abound in nature: contrary to common perception, evolution does not optimise for high performance but for high robustness. Giving priority to performance may have made sense in a world of abundant resources, but we are now entering a very different period, where instability is the norm. We must (and will be forced to) backtrack on performance in order to become robust. It’s the freshest and most interesting take on the poly-crisis that I’ve seen in a long time.

https://books.google.co.uk/books/about/Tracts_N_50_Antidote_...

u/tablatom

Karma: 653 · Cake day: August 20, 2009