They most definitely don't. We attach symbolic meaning to their output because we can map it semantically to the input we gave it. Which is why people are often caught by surprise when these mappings break down.
LLMs can emulate reasoning, but the failure modes show that they don't actually reason. We can get them to emulate reasoning coincidentally, well enough and long enough to fool us, investors, and the media. But doubling down on it, hoping the problem goes away with scale or fine-tuning, is proving more and more reckless.
Oh! And also, moving within the lane is sometimes important for getting a better look at what's ahead of or behind you, or for expressing car "body language" that lets others know you're probably going to change lanes soon.
I commute mainly on the highway, about 45 minutes to an hour each way every day, and it makes a big difference for driver fatigue. I was honestly a bit surprised. Even though I'm steering, it requires less effort. I don't have my foot on the gas and I'm not having to adjust my speed constantly.
Critically, though, I do have to pay attention to my surroundings. It's not taking so much out of my driving that I can't stay engaged with what's happening around me.