Deleted Comment
Needless to say, I have zero fond memories of this program. Maybe these were quirks of our particular setup (many other such cases at that firm, sadly), but… eh, whatever. There are better options out there.
> If you can create more compute by simply putting more energy into the process, it could make economic sense to starve human beings in order to generate more and more AI... most governments seem likely to limit AI’s ability to hog energy
This is the most likely scenario, and indeed in that scenario saving humans from starvation requires government action. Some governments will do it, some won't. Those that do will be outcompeted by those that don't. Game over.
Comments like this seem to be getting far more common in the tech community, to me anyway, and I really want an answer: what is this hypothetical god-like entity going to enable that will somehow be limited to a select group of people/nations/whatever rather than spreading throughout the rest of the world? It’s a weird dichotomy wherein “AGI” will somehow solve climate change, enable cold fusion, end human aging, and spread us to the stars, but also inflict mass death, hog all the world’s energy if unchecked, and now starve humans to achieve those things.
>The Incompleteness Theorem says that, given any consistent, computable set of axioms, there's a true statement about the integers that can never be proved from those axioms.
Upon reading something like this I immediately have questions like: if this is so, then how do we know that this statement about the integers is true at all? What does it mean for something to be true within a set of axioms if you can't prove it from them? Why don't we say that the truth of this statement, within those axioms, is undetermined? And if we, on the outside, know that it's true, why can't we just plug that truth back into the theory as a new axiom?
OK, I know the answer to that last one: you can, but then you can run the incompleteness proof again on the new theory. But still, if the "problem" is only with self-referential statements, why can't we somehow isolate all the self-referential statements and have a theory that's complete and consistent except for some caveats? That seems vastly better than just incomplete, period.
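(Tacking on an illustration, not Gödel's actual construction: the self-reference the theorem exploits is mechanically the same trick as a quine, a program that contains an encoded copy of itself. Here's a minimal Python one. The Gödel sentence does the analogous thing with arithmetic encodings of formulas, which is roughly why "isolating" the self-referential statements is harder than it sounds: any theory strong enough to talk about numbers can already build this kind of thing internally.)

```python
# A quine: a program whose output is exactly its own source code.
# The string s is a template for the whole program; s % s fills the
# template's %r slot with s's own repr (and turns %% into %), so the
# program reconstructs and prints its own two lines of source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

No external input, no file reads: the self-description is baked into the program itself, the same diagonalization move that lets the Gödel sentence effectively say "I am not provable."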
Sorry if that makes no sense; I know this topic is famous for attracting cranky discourse. It just feels like all the popular explanations stop just short of really grappling with the theorem's real weirdness.
No offense, but if I’m reading your comment correctly, you’re making it out as though nobody familiar with the proof has ever considered what “truth” really is. That’s… well, there’s a saying among physicists: “you’re not even wrong.” The semantics of language and math have a copious literature behind them. Not to mention that even asking the question is, forgive me, a tad juvenile.
Also, recursively plugging known unknowns back into the theory (if I understood that correctly) still leaves it incomplete: how could a system be “complete” if there are unknowns?
Forgive me if it seems I, too, have ventured into the cranky side of the discourse.
A lot of people find too much free time distressing. Even with plenty of projects to pursue, or come up with, it can be difficult to focus on any particular one, for the simple reason that the mind can be preoccupied with the constant weighing of opportunity costs and become stagnant (only now with even more anxiety about opportunity being wasted).
Sometimes people reveal things about themselves because they’re looking for the validation only others commiserating can bring. Unfortunately this often invites needless criticism.