This would be interesting, but it looks like it has barely been started - the CPU is hardly what I would call a CPU, let alone a Lisp interpreter.
Verilog isn't a programming language (it tries to be, unfortunately). For synthesis, it is a hardware description language. Someday I'll write up some decent Verilog tutorials because there aren't any good ones on the Internet.
My thoughts exactly. When I saw "Perhaps the Verilog language is not so good, because some nice standard language featuers (forever-loop etc.) are missing in the Xilinx-Tools.", I had flashbacks to VHDL class, students writing loops and then wondering why they couldn't synthesize the code. You have to get yourself into a hardware mindset, thinking about clocks and state machines and enable lines rather than infinite loops.
Yeah, it looks like this guy needs to learn more Verilog before he can make progress on the CPU implementation.
It reminds me of teaching some VHDL at a hackerspace a few years ago (I had learned it in college some years earlier), and this software guy kept trying to write functions and loops, and had trouble with the concept that all signals propagate concurrently rather than sequentially!
Sure, it looks like code (in fact, VHDL's syntax is purposefully similar to Ada), but it sure isn't code.
I took a look and I don't know where to start. The table of contents is completely different from how I would introduce the concepts. The site says "This book requires that you first read Digital Circuits", which is a good point, so maybe most of the design/Verilog content should go there instead.
Unfortunately, the digital circuits book also doesn't have a modern table of contents: SR flip-flops and 7400-series TTL are not exactly related to Verilog much at all.
Is there a place where such issues can be discussed?
Please do! I've wanted to learn FPGAs for years now, but never really got around to it because it's so niche that there aren't many quality tutorials.
You need to look at The Architecture of Symbolic Computers by Peter Kogge. It contains a machine you can implement. I have implemented it in the Jatha LISP interpreter ( http://sourceforge.net/projects/jatha/ ).
I'd like to point out that the code he has on there isn't a Lisp CPU and I don't think it will even work in a real FPGA. I don't know Verilog, just VHDL, but that business in the INIT state looks sketchy to say the least. Even if it does work, it just loads a few instructions into memory & executes them; these instructions blink an LED on the board. The Lisp architecture is barely described beyond a few sparse notes about a tagged memory architecture (not implemented).
I'm not attacking the original author; these look like personal notes as he explores an FPGA and realizes that hardware design is complicated. But don't get your hopes up on this being... well, anything.
Yeah, it's definitely not synthesizable in its current state, not to mention that he's going to have a bad time if he messes with the clock the way he does in this code (the way he generates his slowClock), unless his tools are clever enough to recognize the pattern and do the right thing.
It's a cool project, but if I were him I'd learn the language with some simpler and smaller designs first, maybe some stuff he'll be able to reuse in his final CPU.
As it is, it reads a bit like someone who wants to write a kernel in C without understanding how pointers work.
This was started back in 2004 and the last update was apparently in 2007 [1]; not a lot of the content has changed. Building CPUs in FPGAs is fun, and a lot of the demo boards available these days already have memory and often either an SD-card interface or some sort of NAND flash attached. A good place to start if you're interested is fpgacpu.org (not exactly a 'live' site, but there is good info there) or opencores.org.
I'd like to see a modern CPU that handles dynamic typing in hardware. Registers can store values as well as the type, e.g. 32 value bits and a few type bits. Basic arithmetic like addition can automatically compare the type bits and just do the right thing with a single instruction (fixnum add, float add, bignum add, etc.).
Not quite dynamic typing, but the Mill CPU stores a lot of metadata in-CPU (which is lost when written to memory): things like size, vector width, validity of the value (as opposed to faulting immediately), or ALU status bits (overflow, zero). The particular interpretation of the bits is still left up to the opcode used, but it does enable some neat tricks that vaguely remind me of dynamic typing.
That reminds me of SOAR [1] (from 1984): Smalltalk on a RISC
Smalltalk on a RISC (SOAR) is a simple, Von Neumann computer that is designed to execute the Smalltalk-80 system much faster than existing VLSI microcomputers. The Smalltalk-80 system is a highly productive programming environment but poses tough challenges for implementors: dynamic data typing, a high level instruction set, frequent and expensive procedure calls, and object-oriented storage management. SOAR compiles programs to a low level, efficient instruction set. Parallel tag checks permit high performance for the simple common cases and cause traps to software routines for the complex cases. Parallel register initialization and multiple on-chip register windows speed procedure calls. Sophisticated software techniques relieve the hardware of the burden of managing objects. We have initial evaluations of the effectiveness of the SOAR architecture by compiling and simulating benchmarks, and will prove SOAR's feasibility by fabricating a 35,000-transistor SOAR chip. These early results suggest that a Reduced Instruction Set Computer can provide high performance in an exploratory programming environment.
I got part way through building type-checking hardware to use with Franz Lisp on 68k CPUs. Franz Lisp allocated objects of a single type in each page, and the 68k had function-code pins that let you tell whether a bus read was a data or instruction fetch. The idea was that I would modify the compiler to read a latched type value just after a Lisp object had been read into a register.
As I recall (and I may have some of this wrong) the Symbolics 3600 machines used a few extra bits to tag types like 32bit_int, 32bit_float, and pointer_to_object. And when operating on integers, the cpu would assume it had 32bit_ints and would raise a low-level signal to interrupt the operation if the operands were something else (like bignums).
There were also extra bits in each word for cdr-coding lists, which was a way to internally implement a list as an array.
There might have also been an extra bit for GC, maybe for mark-and-sweep.
Dynamic typing in hardware seems like one of the best-kept secrets. It's probably easier for a software guy to think of adding dynamic typing to hardware than it is for a hardware guy to want to program in a dynamic language.
It sounds feasible, but I believe you would also have to augment data values in memory with this type. Every 32 bits in memory would also need a few bits associated with it for type and the memory buses would need to be widened to move these around.
You could instead have the tag bits inside the values in memory. Have the CPU treat the data in memory as though it was 60 bits wide with 4 additional tag bits instead of 64, or something along those lines. With special instructions that ignored tag metadata for those cases where you needed the full width (mainly interoperability).
EDIT: I started https://en.wikibooks.org/wiki/Programmable_Logic/Verilog_for...
[1] http://web.archive.org/web/*/http://www.frank-buss.de/lispcp...
LispmFPGA (2008) http://www.aviduratas.de/lisp/lispmfpga/
https://groups.google.com/forum/?fromgroups=#!topic/comp.lan...
IGOR (2008) http://opencores.org/project,igor
https://www.flickr.com/photos/kaitorge/sets/7215760944571932...
Would this be cool or am I dreaming?
http://millcomputing.com/topic/metadata/
[1] http://dl.acm.org/citation.cfm?id=808182
[1] http://fare.tunes.org/tmp/emergent/kmachine.htm
[1] http://dspace.mit.edu/handle/1721.1/6334