"runs directly on embedded hardware"
https://www.raspberrypi.com/documentation/microcontrollers/m...
I don't understand why they have the need to do this...
"runs directly on embedded hardware"
https://www.raspberrypi.com/documentation/microcontrollers/m...
I don't understand why they have the need to do this...
Still cool, but I would definitely ease back the first claim.
I was going to say it makes me wonder how much of a pain a direct processor like this would be, in terms of having to constantly update it to adapt to the new syntax/semantics every time there's a new release.
Also - are there any processors made to mimic ASTs directly? I figure a Lisp machine does something like that, but not quite... Though I've never even thought to look at how that worked on the hardware side.
EDIT: I'm not sure AST is the correct concept, exactly, but something akin to that... Like building a physical structure of the tree and processing it the way an interpreter would. I think something like that would require a real-time self-programming FPGA?
The system compiles Python source to CPython ByteCode, and then from ByteCode to a hardware-friendly instruction set. Since it builds on ByteCode—not raw syntax—it’s largely insulated from most language-level changes. The ByteCode spec evolves slowly, and updates typically mean handling a few new opcodes in the compiler, not reworking the hardware.
Long-term, the hardware ISA is designed to remain fixed, with most future updates handled entirely in the toolchain. That separation ensures PyXL can evolve with Python without needing silicon changes.
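Since the toolchain starts from CPython bytecode rather than source syntax, the intermediate representation it consumes is easy to inspect yourself. A minimal sketch using the standard-library `dis` module (the function `blink` here is just an illustrative stand-in, not part of PyXL):

```python
import dis

def blink(pin, n):
    # A trivial loop of the kind a bytecode-level compiler would consume
    for _ in range(n):
        pin ^= 1
    return pin

# List the CPython bytecode opcodes for the function; a compiler targeting
# a hardware-friendly ISA would translate from this level, not from syntax.
ops = [ins.opname for ins in dis.Bytecode(blink)]
print(ops)
```

The exact opcode names vary slightly between CPython releases, which is the point being made above: a new release typically means handling a few new opcodes in the compiler, not reworking the hardware.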
Reading further down the page, it says you have to compile the Python code using CPython, then generate binary code for its custom ISA. That's neat, but it doesn't "execute python directly" - it runs compiled binaries just like any other CPU. You'd use the same process to compile for x86, for example. It certainly doesn't "take regular python code and run it in silicon" as claimed.
A more realistic claim would be "a processor with a custom architecture designed to support Python".
What gets executed is a direct mapping of Python semantics to hardware. In that sense, this is more “direct” than most systems running Python.
This phrasing is about conveying the architectural distinction: Python logic executed natively in hardware, not interpreted in software.
It's still early days and there’s a lot more work ahead, but I'm very excited about the possibilities.
I definitely see areas like embedded ML and TinyML as a natural fit — Python execution on low-power devices opens up a lot of doors that weren't practical before.
You're right that it can definitely be faster — there's real room for optimization.
When I have time, I may write a blog post that will explain where the cycles go, why it's different from raw assembler toggling, and how it could be improved.
Also, just to keep things in perspective — don't forget to compare apples to apples: On a Pyboard running MicroPython, a simple GPIO roundtrip takes about 14 microseconds. PyXL is already achieving 480 nanoseconds, so it’s a very different baseline.
Thanks for raising it — it's a very good point.
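To make the apples-to-apples point concrete: even on a fast desktop CPython, a trivial Python-level call costs on the order of tens to hundreds of nanoseconds of interpreter dispatch before any I/O happens. A rough illustration (absolute numbers will differ from a Pyboard, but it shows why dispatch overhead, not the pin itself, dominates a software GPIO roundtrip):

```python
import timeit

def toggle(state):
    # Stand-in for the Python-level work behind a GPIO roundtrip
    return state ^ 1

# Time many calls and report the per-call cost in nanoseconds.
n = 100_000
seconds = timeit.timeit("toggle(0)", globals=globals(), number=n)
per_call_ns = seconds / n * 1e9
print(f"~{per_call_ns:.0f} ns per interpreted call")
```

On a microcontroller-class core running MicroPython, that per-operation overhead is far larger, which is how a roundtrip ends up around 14 microseconds.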
Do you plan to have AMBA or Wishbone Bus support?
PyXL already communicates with the ARM side over AXI today (Zynq platform).
Congratulations!!
> Runs a subset of Python
What's the advantage of using a new custom toolchain, custom instruction set and custom processor over existing tools that compile a subset of Python for existing CPUs? - e.g. Cython, Nuitka etc?
Just to name a few limitations:
- Many rely heavily on the CPython runtime, meaning garbage collection, interoperability, and object semantics are still governed by CPython’s model.
- They’re rarely designed with embedded or real-time use cases in mind: large binaries, non-deterministic execution (due to the underlying architecture or GC behavior), and limited control over timing.
If these solutions were truly turnkey and broadly capable, CPython wouldn't still dominate—and there’d be no reason for MicroPython to exist either.
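The GC-driven non-determinism mentioned above is easy to demonstrate on stock CPython, whose runtime model these tools inherit. A hedged sketch (the sizes here are arbitrary, chosen just to make a collection pass measurable):

```python
import gc
import time

# Create a pile of cyclic-collectable garbage, then time a collection pass.
# Pauses like this can land at unpredictable points in real-time code.
junk = [[i] for i in range(100_000)]
del junk

t0 = time.perf_counter()
gc.collect()
pause_ms = (time.perf_counter() - t0) * 1000
print(f"collection pause: {pause_ms:.3f} ms")
```

The absolute pause length varies by platform; the point is that timing is governed by the collector, not by the user's code.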