It's a miracle emm386 managed to squeeze out more memory as UMBs than it needed to do its job.
Consider: it scans for holes in the memory map. It switches to protected mode, then V86 mode, sets up page tables, and, as far as the CPU is concerned, is basically more of an OS than DOS itself. It monitors I/O ports and rewrites DMA accesses from random programs to translate the UMBs to their physical addresses. It implements VCPI to cooperate with e.g. Windows and DOS/4GW. It switches A20 on and off when translation needs to happen.
And all that to gain memory from segments C000, D000, and E000, if nothing else sits there. So 192 KB max.
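As a rough model of the page-table trick (addresses are illustrative; EMM386's real bookkeeping is more involved): in V86 mode every 4 KB page of the first megabyte can be remapped, so a hole at C000-EFFF can be backed by pages of extended memory:

```python
PAGE_SIZE = 4096

def linear(seg, off):
    """Real-mode segment:offset -> 20-bit linear address."""
    return (seg << 4) + off

def umb_pages(start_seg, end_seg):
    """4 KB page numbers covering an upper-memory hole."""
    first = linear(start_seg, 0) // PAGE_SIZE
    last = linear(end_seg, 0xFFFF) // PAGE_SIZE
    return list(range(first, last + 1))

# The C000-EFFF hole: a V86 monitor would point each of these
# page-table entries at a free page of extended memory.
pages = umb_pages(0xC000, 0xE000)
print(len(pages) * PAGE_SIZE // 1024)  # 192 -- the "192 KB max"
```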
Looking back, it's insane what contortions we went through, just to keep DOS alive.
It's a shame that V86 mode couldn't be nested. I don't remember the specifics, but as I recall there are instructions that don't trap, which prevents nesting V86 within itself. I know Intel rectified this in later CPUs -- I believe that's the VT-x extensions.
This is why I've suggested an inverse EMM386 to the FreeDOS team: a 32-bit app that has a single purpose -- start 1 DOS VM.
That is the only way I can imagine to create a form of DOS that could be booted on UEFI computers, and without that, soon FreeDOS will be confined to VMs and old kit.
The idea is, start a very very simple 32-bit binary that creates 1 V86-mode VM, maps some DOS resources into it (disk, screen, keyboard, mouse, etc.), then loads the DOS kernel into it.
In DOS a stub EMM386.EXE could create some UMBs and LIM EMS if you want it.
I doubt it will ever be possible to support sound cards, network cards, etc. this way, but most modern ones don't work on DOS anyway, so nothing of importance is lost.
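A minimal sketch of the core mechanism such a loader would lean on (the addresses are hypothetical, not a real memory map): the CPU drops into V86 mode when an IRETD executed in ring 0 pops an EFLAGS image with the VM bit (bit 17) set, together with the real-mode CS:IP and SS:SP for the guest:

```python
EFLAGS_VM = 1 << 17  # VM flag: CPU resumes in virtual-8086 mode
EFLAGS_IF = 1 << 9   # interrupts enabled inside the VM

def v86_iret_frame(cs, ip, ss, sp, es=0, ds=0, fs=0, gs=0):
    """Stack frame (top of stack first) that an IRETD pops to enter V86 mode."""
    eflags = EFLAGS_VM | EFLAGS_IF | 0x2  # bit 1 of EFLAGS is always set
    return [ip, cs, eflags, sp, ss, es, ds, fs, gs]

# Hypothetical example: DOS kernel entry at 0070:0000, stack at 0000:7C00.
frame = v86_iret_frame(cs=0x0070, ip=0x0000, ss=0x0000, sp=0x7C00)
assert frame[2] & EFLAGS_VM  # VM bit set -> guest code runs as real-mode code
```

From there, privileged and trapping instructions in the guest fault into the 32-bit monitor, which emulates them against the mapped-in DOS resources.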
The big achievement of DOS and the 8088 was relocation during loading. For .COM files this was easy: the program was segment-register relative, so you pretty much had virtual memory. For .EXE files, MS-DOS patches all of the absolute segment addresses during the load process.
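The .EXE fix-up pass can be sketched like this (a simplified model, not MS-DOS's actual loader): each relocation entry in the MZ header names a far pointer in the image whose segment word gets the load segment added to it:

```python
def apply_relocations(image, relocs, load_seg):
    """Patch absolute segment words in a loaded .EXE image.

    image: bytearray of the program as stored on disk (after the header)
    relocs: (segment, offset) pairs from the MZ relocation table
    load_seg: paragraph at which DOS placed the image
    """
    for rseg, roff in relocs:
        pos = (rseg << 4) + roff               # byte position in the image
        word = int.from_bytes(image[pos:pos + 2], "little")
        word = (word + load_seg) & 0xFFFF      # rebase the segment value
        image[pos:pos + 2] = word.to_bytes(2, "little")
    return image

# A far pointer with segment 0002 stored at offset 0; image loaded at 0x1234.
img = bytearray(b"\x02\x00" + b"\x00" * 14)
apply_relocations(img, [(0x0000, 0x0000)], 0x1234)
print(hex(int.from_bytes(img[0:2], "little")))  # 0x1236
```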
So this meant you could have multiple programs loaded at once: often done with TSRs (terminate-and-stay-resident programs), but it also allowed things like shell windows in Lugaru's Epsilon (an Emacs-style editor) to exist. Languages like Turbo C had system().
Also, the OS was well decoupled from the application programs (it could be independently updated). This was not true for many of the other home computers of the time, but some pulled it off: all CP/M systems and the TRS-80. I think the main exception to this was the video system: many programs made assumptions about the address of the screen buffer.
If I remember correctly, even with a color card it was possible, if a bit rare, for applications to switch to a monochrome video mode. Creating UMBs in VRAM segments A000, B000, or B800 was always considered a suicidal configuration, and EMM386 would not do it unless forced.
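For illustration, a typical CONFIG.SYS setup that keeps EMM386 out of the video segments explicitly (the drive and path here are assumptions):

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM X=A000-BFFF I=C800-DFFF FRAME=E000
DOS=HIGH,UMB
```

X= excludes a range from UMB use, I= includes an unused adapter hole, and RAM enables both EMS and UMBs (NOEMS would give UMBs without EMS).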
a.k.a. Intel, who also designed USB "Full Speed" vs "High Speed"
("Full Speed" was the maximum speed of USB 1.0, 12 Mb/s; USB 1.0 also had "Low Speed", 1.5 Mb/s, which is used by keyboards and mice. "High Speed" was introduced in USB 2.0 and is 480 Mb/s.)
Jokes aside, I have empathy for the pain of naming in protocols and systems that get extended and remain backwards compatible. Intel was the king of this.
The article mentions those at the bottom: "And that’s it for today folks. I intentionally did not touch on DPMI—the technology that truly allowed DOS applications to break free from the 1 MB memory limitation—because I’m saving that for the next article. So, make sure to come back for more!"
[Original author here.] Exactly! That (DPMI) is the thing I originally wanted to talk about but I realized I had to clarify many more thoughts first! And I still have to do the research on DPMI to be able to talk about it :)
DOS/4GW was memorable because it advertised itself while loading games made with it, usually late 386/486 era 32-bit games from before gaming on Windows had its act together.
Phar Lap was more ubiquitous throughout the 286 and 386 DOS days. Microsoft distributed a light version of it with their compiler for a while.
They also had DOS/16M which targeted 286 protected mode. The company I worked for used it when moving the Clipper programming language to protected mode.
Windows/386 and the DOS-based Windows up to Me were similarly hypervisors for DOS.
True.
https://www.digitalmars.com/ctg/handle-pointers.html
There was also a VCM system that swapped code to/from disk in a manner that was very much like virtual paging.
https://www.digitalmars.com/ctg/vcm.html
I remember a debugger that gave you dual screens on an 8086 by using the monochrome adapter for text/debugging whilst the color screen was for the program.
Looking at the man page http://www.manmrk.net/tutorials/DOS/help/emm386.exe.htm, the VRAM segments are not allowed for the FRAME= and Mx options; the range A000-BFFF is forbidden.
You had developers using it as a power feature.
My other pet peeve about Intel is: why the heck would they make a paragraph just 16 bytes? Why not 256?
An impressive suite was the stuff from Quarterdeck. Their DOS extender was called QDPMI, and it used the QEMM memory manager.
Quarterdeck DESQview, built on QEMM, truly extended DOS by turning it into a kind of multi-tasking system, allowing users to be more productive.
It was known for its high application compatibility.
I used DJGPP in the early 1990s, and the GO32.EXE extender, which did the job.
That company produced a number of excellent DOS survival tools. QEMM memory manager was an absolute must to get games working.
I'm still trying to track down the very first extenders from 1987/88, but no leads so far.