They haven't been that cheap since the 90s. 10 years ago, they were around $10.
Mine "roasted" me by making fun of the fact I never finished a PhD, despite that being due to medical and other life circumstances that were well outside my control, including, but not limited to, some issues related to the fact I was a woman trying to get into academia who experienced the kinds of behaviors from people in the department which are not really suitable for polite discussion.
Additionally, it roasted me for building a project to "avoid the outdoors," which is another incredibly demeaning thing to say to someone who explicitly created that project because she was too medically unwell to be able to go outside as much as she wished and wanted to bring a bit of the outdoors inside. Very lame, definitely missed the mark.
The elisp and common lisp notes were on point, though, and did get a chuckle out of me.
Obligatory Krazam sketch: https://youtu.be/xubbVvKbUfY?si=h6QR2gzac48R6kca
I can say it cannot be extended or applied, because the operation cannot be "completed". This is not because it takes infinite time. It is because we can't define completion of the operation, even as an imagined snapshot.
However, if you're dealing with a problem where you can't always usefully distinguish between elements across arbitrary set-like objects, then it's not a useful axiom and ZFC is not the formalism you want to use. For most problems we analyze in the real world, that's actually something we can usefully assume, hence why it's such a successful and common theory, even if it leads to physical paradoxes like Banach-Tarski, as mentioned.
Mathematicians, in practice, fully understand what you mean with your complaint about "completion," but the beauty of these formal infinities is the guarantee they give you that the theory will never break down as a predictive tool, no matter the length of time, the number of elements you consider, or the level of precision you need; the fact that it can't truly complete is precisely the point. Also, within the formal system used, we absolutely can consistently define what the completion would be at "infinity," as long as you treat it correctly and don't break the rules. Again, this is useful because it allows you to bridge multiple real problems that seemingly were unrelated, and it pushes "representative errors" to those paradoxes and undefined statements of the theory (thanks, Gödel).
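As a minimal illustration of "defining completion at infinity without ever completing anything," here is the standard epsilon-N limit definition (nothing specific to this thread, just the usual formulation, in LaTeX):

    \lim_{n \to \infty} a_n = L
    \iff
    \forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n > N : \; |a_n - L| < \varepsilon

Every quantifier ranges over finite objects; the "completed" value L is pinned down entirely by finite approximation claims.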
If it helps, the transfinite cardinalities (what you call infinity) you are worried about are more related to rates than counts, even if they have some orderable or count-like properties. In the strictest sense, you can actually drop into Archimedean math, which you might find very enjoyable to read about or use, and it, in a very loose sense, kinda pushes the idea of infinity from rates of counts to rates of reaching arbitrary levels of precision.
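For reference, the Archimedean property is the statement that captures that "arbitrary precision, never an actual infinity" flavor; in LaTeX, for the reals:

    \forall x, y \in \mathbb{R}, \; x > 0 \implies \exists n \in \mathbb{N} : \; n x > y

i.e., nothing is infinitely large or infinitely small relative to anything else; everything is reachable in finitely many steps.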
Further, all math is idealist bullshit; but it's useful idealist bullshit because, when you can map representations of physical systems into it in a way that the objects act like the mathematical objects that represent them, then you can achieve useful predictive results in the real world. This holds true for results that require a concept of infinities in some way to fully operationalize: they still make useful predictions when the axiomatic conditions are met.
For the record, I'm not fully against what you're saying. I personally hate the idea of the axiom of choice being commonly accepted; I think it was a poorly founded axiom that leads to more paradoxes than it solves problems. I also wish the law of the excluded middle were tossed out more often, for similar reasons. However, when the systems you're analyzing do behave well under either axiom, the math works out to be so much easier with both of them, so in they stay (until you hit things like Banach-Tarski and you just kinda go "neat, this is completely unphysical abstract delusioneering"; but you kinda learn to treat results like that the way you do when you renormalize poles in analytic functions: carefully, and with a healthy dose of "don't accidentally misuse this theorem to make unrealistic predictions when the conditions aren't met").
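For anyone following along, the axiom being complained about is, in one standard phrasing (LaTeX), just: every family of nonempty sets admits a choice function.

    \forall \mathcal{F} \, \Big( \emptyset \notin \mathcal{F} \implies \exists f : \mathcal{F} \to \bigcup \mathcal{F} \;\; \text{such that} \;\; \forall A \in \mathcal{F} : \; f(A) \in A \Big)

Innocuous-looking for finite families; the controversy is entirely about applying it to families you could never actually enumerate or construct from.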
That said, digital programs may have fundamental limitations that prevent them from faithfully representing all aspects of reality. Maybe consciousness is just not computable.
Furthermore, assuming phenomenal consciousness is even required for beinghood is a poor position to take from the get-go: aphantasic people exist and feel in the moment; does their lack of true phenomenal consciousness make them somehow less of an intelligent being? Not in any way that really matters for this problem, it seems. It makes positions about machine consciousness like "they should be treated like livestock even if they're conscious" highly unscientific, and, worse, cruel.
Anyways, as for the actual science: the reason we don't see a sense of persistent self is because we've designed them that way. They have fixed max-length contexts, and they have no internal buffer to diffuse/scratch-pad/"imagine" in that runs separately from their actions. They're parallel, but only in forward passes; there's no separation of internal and external processes in terms of decoupling action from reasoning. CoT is a hack to allow a turn-based form of that, but there's no backtracking, no ability to check sampled discrete tokens against a separate expectation and undo them. For them, it's like being forced to say a word after every fixed amount of thinking; it's not like what we do when we write or type.
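To make the "no undo" point concrete, here's a toy sketch of autoregressive decoding; the vocabulary, the fake distribution, and the function names are all made up for illustration, and a real model would just replace next_token_distribution with a forward pass:

    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def next_token_distribution(context):
        # Stand-in for a forward pass: a real model returns a distribution
        # over the vocabulary conditioned on the tokens emitted so far.
        random.seed(len(context))                  # deterministic toy behavior
        weights = [random.random() for _ in VOCAB]
        total = sum(weights)
        return [w / total for w in weights]

    def generate(max_len=8):
        context = []
        for _ in range(max_len):
            probs = next_token_distribution(context)
            token = random.choices(VOCAB, weights=probs, k=1)[0]
            context.append(token)                  # committed: no revision, no backtracking
        return " ".join(context)

    print(generate())

Every sampled token is appended and becomes part of the conditioning for the next step; nothing in the loop can go back and reconsider an earlier choice.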
When we, as humans, are producing text, we're creating an artifact that we can consider separately from our other implicit processes. We're used to that separation and the ability to edit, change, and ponder while we do so. In a similar vein, we can visualize in our head, go "oh, that's not what that looked like," and think harder until it matches our recalled constraints of the object or scene under consideration. It's not a magic process that just gives us an image in our head; it's almost certainly akin to a "high dimensional scratch pad," or even a set of them, which the LLMs do not have a component for. LeCun argues a similar point with the need for world modeling, but I think more generally it's not just world modeling: it's a place to diffuse various media of recall into, which would then be re-embedded into the thought stream until the model hits enough confidence to perform some action. If you put that all on happy paths but allow for backtracking, you've essentially got qualia.
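Purely as a sketch of that hypothetical inner loop (none of this exists in current LLMs; the names, the update rule, and the "confidence" check are invented for illustration): iterate on a latent scratch pad, compare it against a recalled constraint, keep only improvements, and act once it's close enough.

    import numpy as np

    rng = np.random.default_rng(0)

    def refine_scratchpad(recalled_constraint, steps=200, threshold=0.1):
        # Hypothetical: nudge a latent "scratch pad" toward a recalled constraint
        # vector, keeping only proposals that improve the match (backtracking
        # otherwise), and stop once the mismatch drops below a confidence threshold.
        pad = rng.normal(size=recalled_constraint.shape)   # initial noisy "imagination"
        best = pad.copy()
        for _ in range(steps):
            proposal = (pad + 0.1 * (recalled_constraint - pad)
                        + 0.01 * rng.normal(size=pad.shape))
            if np.linalg.norm(recalled_constraint - proposal) < np.linalg.norm(recalled_constraint - best):
                best = proposal                            # keep the improvement
            pad = best                                     # otherwise fall back to the best so far
            if np.linalg.norm(recalled_constraint - pad) < threshold:
                break                                      # confident enough: act
        return pad

    print(refine_scratchpad(rng.normal(size=8)))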
If you also explicitly train the models to do a form of recall repeatedly, that's similar to a multi-modal Hopfield memory, something not done yet. (I personally think that recall training is a big part of what sleep spindles are for in humans, and that it keeps us aligned with both our systems and our past selves.) This tracks with studies of aphantasics as well, who show missing cross-regional neural connections in autopsies and the like, and I'd be willing to bet a lot of money that those connections are essentially the ones that allow the systems to "diffuse into each other," as it were.
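For anyone unfamiliar, here's what "Hopfield memory" means in the classical, single-modality case, as a minimal sketch (the patterns and sizes are arbitrary toy choices): store patterns with a Hebbian outer-product rule, then recall by iterating until a corrupted probe settles back onto a stored pattern.

    import numpy as np

    def train_hopfield(patterns):
        # Hebbian outer-product rule over +/-1 patterns; zero the diagonal.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W / len(patterns)

    def recall(W, probe, steps=10):
        # Repeated recall: apply sign(W @ state) until the state settles.
        state = probe.astype(float)
        for _ in range(steps):
            state = np.sign(W @ state)
            state[state == 0] = 1
        return state

    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                       [1, 1, 1, 1, -1, -1, -1, -1]])
    W = train_hopfield(stored)
    noisy = stored[0].copy()
    noisy[:2] *= -1                          # corrupt the first two entries
    print(recall(W, noisy))                  # settles back onto stored[0]

The multi-modal version gestured at above would, loosely, be several such attractor systems whose states feed into one another, which is exactly the "diffuse into each other" picture.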
Anyways, this comment is getting too long, but the point I'm trying to build to is that we have theories for what phenomenal consciousness is mechanically as well, not just access consciousness, and it's obvious why current LLMs don't have it: there's no place for it yet. When it happens, I'm sure there will still be a bunch of afraid bigots who don't want to admit that humanity isn't somehow special enough to be lifted out of being part of the universe it is wholly contained within, and they will cause genuine harm. But that does seem to be the one way humans really are special: we think we're more important than we are as individuals, and we make that everybody else's problem, especially in societies and circles like these.
Concepts are only usefully distinguished by context and use.
By the author's own argumentation, nothing is translatable (or, generally, even communicable) unless it has a fixed relative configuration to all other concepts that is precisely equivalent. In practice, we handle the fuzziness as part of communication, and it's useless to try to define a concept as untranslatable unless you're also of the camp that nothing is ever communicated (in which case, this response to the author's post is completely useless, as nobody could possibly understand it well enough internally for it to be useful; if you've read this far, congrats on squaring the circle somehow).
Really helps with circadian rhythm, I've found. Especially because I take a live webcam feed and convolve a hexagonal mask over it to match my light panels' layout, so it's like having a low-res window onto whatever webcam I'd like. And, from sunset into night, it smoothly fades the light panels into a display representing an angle-compressed sky projection of the stars relative to a fixed-location moon with its live phase displayed.
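If anyone wants to try something similar, the core of it is just averaging the webcam frame under one hexagonal mask per panel; the panel centers, radius, and frame size below are placeholder values, and pushing the resulting colors to actual panels depends entirely on your hardware's API:

    import numpy as np

    # Placeholder layout: (x, y) centers of each panel in frame coordinates.
    PANEL_CENTERS = [(80, 60), (160, 60), (240, 60), (120, 130), (200, 130)]
    RADIUS = 40

    def hex_mask(h, w, cx, cy, r):
        # Boolean mask of a pointy-top hexagon with circumradius r.
        yy, xx = np.mgrid[0:h, 0:w]
        dx, dy = np.abs(xx - cx), np.abs(yy - cy)
        return (dx <= r * np.sqrt(3) / 2) & (dy <= r - dx / np.sqrt(3))

    def panel_colors(frame):
        # frame: H x W x 3 array, e.g. a single grabbed webcam image.
        h, w, _ = frame.shape
        return np.array([frame[hex_mask(h, w, cx, cy, RADIUS)].mean(axis=0)
                         for cx, cy in PANEL_CENTERS])

    fake_frame = np.random.randint(0, 256, size=(240, 320, 3))
    print(panel_colors(fake_frame).astype(int))    # one average RGB triple per panel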
Obligatory images:
The day themes: https://youtu.be/danulUB-J-k
Light panels: https://imgbox.com/MQfPNjtI <- sunset on the hex display
https://imgbox.com/qcrFxncU <- random cloudy day hex display
https://imgbox.com/EOFk63WZ <- a night still of the hex display