> Reasoning and consciousness are seperate(sic) concepts
No, they're not. In tech we seem to have a culture of severing the humanities for utilitarian purposes, but classical reasoning uses consciousness and awareness as elements of processing.
It's only meaningless if you don't know the philosophical or epistemological definitions of reasoning, which is to say, if you don't know what reasoning is. Then, yes, you'd think it was a meaningless statement.
Do computers think, or do they compute?
Is that a meaningless question to you? Given your position, I'm sure it seems irrelevant and meaningless.
And this sort of thinking is why we have people claiming software can think and reason.
> "classical reasoning uses consciousness and awareness as elements of processing"
They are not the _same_ concept then.
> It's only meaningless if you don't know what the philosophical or epistemological definitions of reasoning are. Which is to say, you don't know what reasoning is. So you'd think it was a meaningless statement.
The problem is that the only information we have is internal. So we may claim those things exist in us, but we have no way to establish whether they are happening in another person, let alone in a computer.
> Do computers think, or do they compute?
Do humans think? How do you tell?
The examples are from all the major commercial American LLMs as listed in a sister comment.
You seem to conflate context windows with tracking chess pieces. The context window is more than large enough to remember 10 moves. The model should either track the pieces, or acknowledge that without a board to look at it is effectively playing blindfold chess, which it isn't good at, and ask you to list the position after every move to make it fair. Otherwise it doesn't know what it's doing, and it's demonstrably the latter.
https://arxiv.org/abs/2501.17186
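To make the point concrete, here's a rough sketch of how you could check this yourself with the python-chess library. The `ask_model` function is a placeholder for whatever API you're testing, not part of any particular product:

```python
# Sketch: verify whether a model actually tracks the position after N moves.
# python-chess provides the ground truth; `ask_model` is a stub to be replaced
# with a call to the LLM under test.

import chess

def ground_truth_fen(san_moves):
    """Replay a list of SAN moves and return the resulting FEN."""
    board = chess.Board()
    for san in san_moves:
        board.push_san(san)  # raises ValueError on an illegal move
    return board.fen()

def ask_model(san_moves):
    """Placeholder: prompt the model for the position after these moves.

    Expected to return a FEN string. Replace with a real API call.
    """
    raise NotImplementedError("plug in the LLM you want to test")

if __name__ == "__main__":
    # Ten moves (five per side) of a Ruy Lopez, well within any context window.
    moves = ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6",
             "Ba4", "Nf6", "O-O", "Be7"]
    truth = ground_truth_fen(moves)
    print("Ground truth:", truth)
    # claimed = ask_model(moves)
    # print("Board matches:", claimed.split()[0] == truth.split()[0])
```

If the model's claimed position diverges from the replayed board after a handful of moves, it isn't tracking pieces; it's pattern-matching on move text.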
PS "Major commercial American LLM" is not very meaningful, you could be using GPT4o with that description.