He seems to share your sentiment
And we should start considering what makes us human and how we can valorize our common ground.
...and that's it! Turns out the hard part is not riding a bike but riding a bike in a straight line. Once you've got the hang of riding wherever the bike seems to want to go, you can gradually learn to get it under control. Surprisingly easy!
This is a serious question. If it's possible for an A.I. to be "dishonest", then how do you know when it's being honest? There's a deep epistemological problem here.
I think Alan Kay said it best - what we’ve done with these things is hacked our own language processing. Their behaviour has enough in common with something they are not that we can’t tell the difference.
Ideally voices that don’t have a vested interest.
For example, give a superintelligence some money, tell it to start a company. Surely it’s going to quickly understand it needs to manipulate people to get them to do the things it wants, in the same way a kindergarten teacher has to “manipulate” the kids sometimes. Personally I can’t see how we’re not going to find ourselves in a power struggle with these things.
Does that make me an AI doomer party pooper? So far I haven’t found a coherent optimistic analysis. Just lots of very superficial “it will solve hard problems for us! Cure disease!”
It certainly could be that I haven’t looked hard enough. That’s why I’m asking.
Why can’t your protocol just be valid JavaScript too? this.name = "string"; instead of mixing so many metaphors?
It is.
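As a minimal sketch of what the suggestion could look like (all names here are hypothetical, since the actual protocol isn't shown in the thread): a schema file written as plain assignments is itself valid JavaScript, so it can be loaded by evaluating it with `this` bound to an empty object.

```javascript
// Hypothetical sketch: a "protocol" file that is also valid JavaScript.
// Evaluating the source with `this` bound to an empty object turns
// lines like `this.name = "string";` directly into schema entries.
function loadSchema(source) {
  const schema = {};
  // In a non-strict Function body, `this` is whatever we pass to call().
  new Function(source).call(schema);
  return schema;
}

const source = `
  this.name = "string";
  this.age = "number";
  this.tags = ["string"];
`;

console.log(loadSchema(source));
// → { name: "string", age: "number", tags: ["string"] }
```

The appeal of the idea is that there is only one grammar to learn: anything a JavaScript parser accepts is a well-formed schema, with no second mini-language layered on top.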
Among his notable accomplishments, he and coauthors mathematically characterized the propagation of signals through deep neural networks using techniques from physics and statistics (mean field and free probability theory), leading to arguably some of the most profound yet under-appreciated theoretical and experimental results in ML of the past decade. For example, see “dynamical isometry” [1] and the evolution of those ideas, which were instrumental in achieving convergence in very deep transformer models [2].
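The core of the dynamical-isometry result can be illustrated numerically with a toy case the papers themselves discuss: for a deep *linear* network, the input-output Jacobian is just the product of the weight matrices, and orthogonal initialization keeps all of its singular values at exactly 1, while scaled Gaussian initialization lets them spread over many orders of magnitude. A minimal sketch (dimensions and depth chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, depth = 64, 50  # width and depth of the toy linear network

def jacobian_singular_values(make_layer):
    # For a deep linear net, the input-output Jacobian is simply
    # the product of the layer weight matrices.
    J = np.eye(d)
    for _ in range(depth):
        J = make_layer() @ J
    return np.linalg.svd(J, compute_uv=False)

# Gaussian init scaled so E[||Wx||^2] = ||x||^2 (entry variance 1/d):
# norms are preserved only on average, not direction by direction.
gauss = lambda: rng.normal(0.0, 1.0 / np.sqrt(d), (d, d))

# Orthogonal init: exactly norm-preserving in every direction.
def orth():
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

sv_gauss = jacobian_singular_values(gauss)
sv_orth = jacobian_singular_values(orth)

# Orthogonal: every singular value is exactly 1 (dynamical isometry),
# so gradients neither explode nor vanish with depth.
# Gaussian: the spectrum spreads enormously, so some directions
# explode while others vanish.
print("orthogonal: min %.3f max %.3f" % (sv_orth.min(), sv_orth.max()))
print("gaussian:   min %.2e max %.2e" % (sv_gauss.min(), sv_gauss.max()))
```

The papers' contribution goes much further, extending this analysis to nonlinear networks via mean field theory and free probability, but the linear case already shows why conditioning of the end-to-end Jacobian, not just average norm preservation, is what matters at extreme depth.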
After reading this post and the examples given, in my eyes there is no question that this guy has an extraordinary intuition for optimization, spanning beyond the boundaries of ML and across the fabric of modern society.
We ought to recognize his technical background and raise this discussion above quibbles about semantics and definitions.
Let’s address the heart of his message, the very human and empathetic call to action that stands in the shadow of rapid technological progress:
> If you are a scientist looking for research ideas which are pro-social, and have the potential to create a whole new field, you should consider building formal (mathematical) bridges between results on overfitting in machine learning, and problems in economics, political science, management science, operations research, and elsewhere.
[1] Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
http://proceedings.mlr.press/v80/xiao18a/xiao18a.pdf
[2] ReZero is All You Need: Fast Convergence at Large Depth
For me personally, even light switches have been a huge tell in the past, so basically almost anything electrical.
I've always held the utterly unscientific position that this is because the brain only has enough GPU cycles to show you an approximation of what the dream world looks like, but to actually run a whole simulation behind the scenes would require more FLOPs than it has available. After all, the brain also needs to run the "player" threads: It's already super busy.
Stretching the analogy past the point of absurdity, this is a bit like modern video game optimizations: the mountains in the distance are just a painting on a surface, and the remote on that couch is just a messy blur of pixels when you look at it up close.
So the dreaming brain is like a very clever video game developer, I guess.