The tasks that are easiest to estimate are the ones that are predictable and repetitive. If I ask you how long it'll take to add a new database field, and you've added a new database field hundreds of times in the past and each time it took a day, your estimate is going to be spot-on.
But in the software world, predictable and repetitive tasks are also the kinds of tasks that are most easily automated, which means the time it takes to perform those tasks should asymptotically approach 0.
But if the predictable tasks take zero time, the total length of a project ends up dominated by its novel, unpredictable parts.
That's why software estimates are so hard to get right.
Making the same mistakes over and over again, and adding a lot of buffer time just to be safe.
You don't need the operand-size prefix 0x66 when running 16-bit code in real mode. So "mov ax, 0" is 3 bytes and "xor ax, ax" is just 2 bytes.
It makes even more sense: zeroing ax and bx (xor ax,ax ; xor bx,bx) comes to 4 octets, DWORD aligned, and is a bit faster for the x86 to fetch than the 3-octet version I wrote before.
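If you want to check these encodings yourself, here is a minimal sketch, assuming you have NASM and ndisasm installed; the file name zero16.asm is just a placeholder:

```nasm
; zero16.asm - hypothetical file name
; assemble:    nasm -f bin zero16.asm -o zero16.bin
; disassemble: ndisasm -b 16 zero16.bin   (shows the raw bytes next to each instruction)
bits 16                 ; 16-bit / real-mode code, no operand-size prefix needed

mov  ax, 0              ; b8 00 00   (3 bytes)
xor  ax, ax             ; 31 c0      (2 bytes)
xor  bx, bx             ; 31 db      (2 bytes) - the two-xor pair is 4 bytes, DWORD aligned
```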
If you look at how the instructions encode, it makes sense to use XOR:
- mov ax, 0 - needs 4 bytes (66 b8 00 00)
- xor ax, ax - needs 3 bytes (66 31 c0)
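To see that prefix appear, the same two instructions can be assembled in a 32-bit code section; a minimal sketch under the same NASM assumptions as above:

```nasm
; prefix32.asm - hypothetical file name
; assemble:    nasm -f bin prefix32.asm -o prefix32.bin
; disassemble: ndisasm -b 32 prefix32.bin
bits 32                 ; 32-bit code, so 16-bit operands pick up the 0x66 prefix

mov  ax, 0              ; 66 b8 00 00   (4 bytes)
xor  ax, ax             ; 66 31 c0      (3 bytes)
```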
This extra byte did matter on a machine with less than 1 megabyte of memory.
On 386 processors it's similar:
- mov eax, 0 - needs 5 bytes (b8 00 00 00 00)
- xor eax, eax - needs 2 bytes (31 c0)
Here Intel kept the XOR encoding at only 2 bytes. I bet this helps the instruction decoder and (of course) saves even more memory than on the old 8086.
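And the 32-bit-operand case, as a sketch under the same assumptions:

```nasm
; zero32.asm - hypothetical file name
; assemble:    nasm -f bin zero32.asm -o zero32.bin
; disassemble: ndisasm -b 32 zero32.bin
bits 32

mov  eax, 0             ; b8 00 00 00 00   (5 bytes)
xor  eax, eax           ; 31 c0            (2 bytes) - same 2-byte opcode/ModRM pair as xor ax,ax in 16-bit code
```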
It is extremely useful if your work ends up on paper. For photography (edit: film and broadcast, too) it would be great.
My use case is comics and illustration, so a self-color-correcting Cintiq or tablet would be great for me.
... the rest is history.
They own the network; they know better than anyone how to tap it.