Microprocessors are just the CPU, usually with the system bus brought out. Microcontrollers are combined systems that include a processor and usually some kind of firmware storage and RAM. You can usually tell them apart by whether their pins are address/data lines or GPIO-like.
> microprocessors have a memory management unit
The 8086 didn't. Having an MMU seems unrelated to the MCU/MPU distinction...
At least, that was my understanding until now. Is this like "crypto", where the meaning changed while I wasn't paying attention?
Microprocessors going back to the Intel 486 have more on-chip RAM than many microcontrollers sold today have. CPUs have built-in firmware ROM for loading boot code. Also, CPUs haven't had a system bus with address/data pins for like 30 years — somewhat ironically, the only chips that have that are microcontrollers. Northbridges haven't existed for 15 years, and southbridges are basically just PCIe-to-USB/SATA/Ethernet/I2C adapters. And many mobile CPUs that Intel and AMD make have on-chip I2C, SPI, UART, GPIO peripherals.
In my article, I'm trying to differentiate between processors that can run a modern multiuser operating system like Linux and microcontrollers that run bare-metal code, since that's a question I get asked a ton. After carefully considering all the options, the MMU seemed like the most obvious place to draw the line. Every CPU in the last 40 years (since the Intel 286) has had an MMU, while no microcontroller ever made has had one. I'm not sure why that seems so unreasonable to you.
Zooming out to a broader management context, non-trivial systems usually comprise multiple processors. In these circumstances production volumes are generally lower and subsystem iteration more likely, so it makes more sense for managers to worry about overall architecture (eg. bus selection, microcontroller vs application processor, application processor architecture) and consider factors such as speed of prototyping, availability of talent, longevity/market-stability of the vendor's platform, feature set and nature of the software toolchain(s), etc. rather than worry about optimizing specific processor part selection.
The management priority generally only flips toward the article's fine-grained approach when you're designing a consumer electronics widget or another very-high-volume product with a low change expectation, where individual processor selection and BOM optimization become greater concerns.
Conclusion: For most people, for most products, most of the time, you don't want to start by worrying about specific processor part selection. Further, prototyping should be done on third party dev boards/SOMs and when production volumes justify it, final PCB design can be handed to an outside EE. This is not always a viable strategy (eg. due to form factor or other application constraints), but it's fast, cheap and hard to go too far wrong.
The only thing that SOMs provide is a processor + DRAM + PMIC. If you practice and become proficient at designing around application processors, it should take you no longer than 3-4 hours to get this component of the system (the processor, DRAM, and PMIC) laid out when working with these entry-level parts.
SOMs aren't some magical remedy to all the problems. It's still up to you to design the actual system, which takes hundreds of hours. The difference between using a SOM or a raw-chip design is negligible at this point.
I have no problem prototyping on EVKs --- in fact, I link to EVKs for each platform in my review. But a lot of these evaluation boards are pretty crummy to prototype with; some don't have all the pins of the CPU brought out, and others use proprietary connectors that are a hassle to adapt to your hardware. You shouldn't be afraid to spend an 8-hour day designing a little breakout board for a part if you're interested in using it in a product that's going to span 6 months' worth of development time.
Of course there are caveats. I'm entirely focused on entry-level parts; if you need a Cortex-A72 with a 128-bit-wide dual-rank DRAM bus, sure, go buy a SOM. Also, it should go without saying that it completely depends on you and your company's core competencies. This article is aimed at embedded designers who are usually working on hardware and software for microcontroller-based platforms. If you work at a pure software shop with no in-house EE talent then this article is likely not relevant to you.
A microcontroller usually[1] doesn't have fancy out-of-order execution, fancy caches, etc., as those would make execution timing less deterministic. An MMU would as well.
Lacking these features also makes microcontrollers a lot slower, and I'm guessing he's thinking about cost or watts per MIPS or something like that. Yes, the application processors draw more power overall, but (I assume) they are so much faster that it more than makes up for it in dollars per MIPS or watts per MIPS.
So it appears to be more of a symptom than a cause. But again, I might be wrong.
[1]: https://en.wikipedia.org/wiki/ARM_Cortex-M#Silicon_customiza...
> when compared to application processors, MMUless microcontrollers are horribly expensive, power-hungry, and slow.
What is it about the lack of an MMU that causes a hunger for power?
The STC8 and N76 parts are 8051, the other two are their own design. The HT-66 looks very much like a PIC16 part, and IDE and compiler are totally free.
The STM8 is probably the best-performing part in that price range, and has a free IDE and compiler.
My review includes pretty extensive discussion on the main page, plus separate reviews for all these parts — check it out and let me know if I need to clarify anything!
Fast forward to today… Rust can interop with C natively. Go can as well, though you bring your luggage with you via CGO. .NET hasn't ever really had that kind of focus: for one, IL; two, Microsoft saw the platform as a cash cow; three, ecosystem lock-in allowed a thriving "MVP" contractor community.
For C-based libraries, P/invoking is trivial in C# and has been around forever. And it's cross-platform, working identically on Linux and macOS. I have no idea how you can say ".Net hasn’t ever really had that kind of focus" when it's been a core part of .NET from the start, and .NET relies on P/Invoke to do everything. Go look at all the DllImport() statements in the .NET reference source. Rust FFI is nearly identical in implementation to C#. Go has a slightly different implementation with the CGO module, but whatever, it's close enough. Just step back and remember that, in general, calling into C code is trivial in every language, since it has to be: all these languages will eventually have to hit libc / user32.dll / whatever.
C++ is a totally different story. You can't work with C++ libraries with P/Invoke, that's true... But you also can't work with C++ libraries using Rust or Go, either. Nor Python, Java, Javascript, or really any other popular cross-platform language.
C++ dynamic libraries are really challenging to call into for a variety of reasons. Virtual functions, multiple inheritance, RTTI, name-mangling, struct/class layout, vtable placement, ABI differences, runtimes, etc all make calling into precompiled C++ libraries a nightmare.
In fact, the only way I know of working with pre-compiled C++ libraries is with languages that target specific operating system / compiler collections. E.g., Objective-C++/Swift from Apple, and C++/CLI from Microsoft. These are obviously not cross-platform solutions, since they depend on knowing the exact compiler configuration used to build those libraries in the first place.
For every other language, you either need to manually build C shim libs that you can call into using the C-based approach above, or, if you have access to the C++ source code, create wrappers around it and build it into a module (for example, using pybind11 in Python).