debugnik · 3 months ago
This is cool! After the removal of all 32-bit targets from the native OCaml compiler, I had grown concerned about the portability of OCaml beyond the interpreter and Jsoo. But between this and the wasm efforts I see there are more escape hatches than I thought.

Consider sharing this in discuss.ocaml.org as well!

PaulHoule · 3 months ago
The CPU is an eZ80 clocked at 48MHz

https://en.wikipedia.org/wiki/Zilog_eZ80#Use_in_commercial_p...

which uses 24 bit pointers for a 16M address space. I like it as it is clearly superior to the '8-bit' machines of the 1980s but a step below ARM.

kragen · 3 months ago
Below ARM in what metric? It's not below ARM in cost, power consumption, or transistor count. It's below ARM in aesthetic appeal and convenience, certainly, but why would you want that?
adrian_b · 3 months ago
Below especially in speed and in ease of programming.

A 48-MHz Z80 will be many times slower than a 48-MHz Cortex-M0+, which also costs only a fraction of a dollar.

If you want decent speed on a Z80, programming becomes difficult. The difference in execution time between naive and optimized Z80 routines for multiplying and dividing 32-bit or 64-bit numbers can be more than an order of magnitude.

Even the commercial programs of the 8080/Z80 era, e.g. the Microsoft BASIC interpreters and FORTRAN compilers, or Turbo Pascal, were very far from optimal performance on the Z80. Replacing their runtime libraries could gain a lot of performance, but that required a lot of work. On an ARM microcontroller nobody has to care about implementing elementary arithmetic operations, because they are handled by hardware.

The difference in cost and transistor count matters only to someone who uses millions of such chips. As an individual, you do not care whether a CPU costs ten cents more or less.

The difference in power consumption is not at all guaranteed, because the energy required to execute a long software procedure, e.g. a multiplication on a Z80, is likely to be greater than the energy consumed by a hardware multiplier. The same holds for any other complex operation.

A Z80 has fewer gates, but they must be reused many times to complete the same work that an ARM MCU does with more gates, each of which is used far fewer times.

Moreover, a Z80 is likely to have been manufactured in an older CMOS process, which consumes more energy per gate switching.

So, except for very simple programs, a 48-MHz Z80 is likely to consume more energy than a 48-MHz Cortex-M0+ to complete the same task, not less.

Many decades ago, I could read the hex dump of a Z80 executable as easily as someone today reads C or Python source. Despite that, I have no interest in programming a Z80 again, even though it brings back pleasant memories.

Writing programs for instruction sets like AVX-512 or Armv9-A is much more interesting than reimplementing elementary operations on a Z80. With a good macroassembler, one could implement an instruction set for a virtual CPU and then use that to simplify the programming of a Z80, but then what is the point of using the Z80 instead of a CPU with more powerful hardware?

If you are insulated from the Z80 ISA by a high-level language, e.g. BASIC, FORTRAN or C, then again you no longer have any reason to use a Z80 instead of a better CPU.

The only purpose of a Z80 today is to run unmodified legacy software, but even for that, a software cycle-accurate emulator is better.