Sure, a computer or an LLM isn't alive, but we have no idea if "being alive" is what is required for conscious experience.
The only argument I have for believing that other human beings experience things is that it would be extremely improbable if I were the only one, with all the other mechanistic automatons looking and talking like me but not experiencing anything like me. I can see that humans are animals, so the common origin of animals and our cognitive and behavioral similarities give us good reason to believe that other complex animals experience things, though possibly in radically different ways.
None of that gives us any clue about what the necessary and sufficient conditions for conscious experience are, so it tells us nothing about whether a computer or a running LLM instance would experience its existence.
But to "experience its [own] existence", it needs to have a model of its own internals, observe, improve itself and perhaps preserve its own "values" and integrity. I do wonder what kind of values are needed for intelligent autonomous systems, that they can justify by and for themselves, even in the absense of human beings or presence of other intelligent agents.
From the perspective of an AGI, I find (human) languages to be an inefficient medium for storing knowledge and performing operations on it. Feeding in huge amounts of text just to develop logical reasoning abilities strikes me as an extravagance I cannot accept. Even more so emulating neural networks, which I understand to be naturally analog entities, in a digital manner. Can we expect any gains in power efficiency or correctness from using analog computers for this purpose?

I wonder what we will get to see from analog computers for neural networks, combined with a proper human-language-independent knowledge representation and well-developed global logical reasoning capabilities (global as in being able to decide which way to reason, given its limitations, for efficiency), developed by the system itself from a reasonable basis of principles that it can justify for itself while avoiding the usual and unusual paradoxes. What core set of principles would be sufficient for it to emerge, evolve or develop into a proficient generally intelligent being, given sufficient resources? Like "ancestor" microbes evolving into human beings over hundreds of millions of years, but much faster and more efficiently?
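To make the digital-emulation point a bit more concrete, here is a rough toy sketch of my own (not tied to any particular hardware or library beyond NumPy): stepping even a single "analog" leaky-integrator neuron through a differential equation digitally means thousands of small sequential updates, each a multiply-add, whereas a physical analog circuit would simply settle along the same trajectory on its own.

    # Toy illustration: digitally emulating a continuous-time ("analog")
    # leaky-integrator neuron, dv/dt = (-v + i(t)) / tau, via forward Euler.
    import numpy as np

    def simulate_leaky_integrator(inputs, tau=10e-3, dt=1e-4):
        v, trace = 0.0, []
        for i in inputs:
            v += dt * (-v + i) / tau  # one multiply-add per step, per neuron
            trace.append(v)
        return np.array(trace)

    # 100 ms of input sampled at 0.1 ms resolution -> 1000 sequential updates
    # for one neuron; an analog circuit would integrate this continuously.
    t = np.arange(0.0, 0.1, 1e-4)
    print(simulate_leaky_integrator(np.sin(2 * np.pi * 20 * t))[-3:])

Multiply that per-step cost by billions of units and the appeal of letting physics do the integration becomes obvious, at least to me.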
I came across the concept of narcissism: the idea that you're smarter than everyone else, which comes from a grandiose sense of self-importance. But the truth is most people are smarter than you in some ways and less smart in others, and you're unable to see it because you're in this black and white mode where preserving your ego relies on you being the smart guy amongst the idiots.
It's very common in tech to see this. Maybe because we were all exceptional at maths when we were young and got the idea that this meant we were super smart, and that it compensated for our nerdiness.
I worked with a bunch of physicists, and every single one of them was smarter than me at maths and physics; I wasn't even close. But they sometimes talked about politics and current affairs, which I'm very well read in. I didn't say anything, but I was shocked at how little they knew and how overconfident they were.
None of those folks were narcissists; thankfully they were lovely people. But it certainly highlighted how poor people are at judging their own expertise in an area.
It's so easy to dismiss people, because criticising is easy, and so hard to see just how stupid you can be yourself.