Wow, I would never have discovered this, and I work full time with .NET and have experience with Avalonia! It's just that I normally don't look for Pi-related stuff there, instead heading to Python as the clear de facto Pi language with its libraries. Really cool to see this kind of niche being carved out by .NET and Avalonia! Too bad it's generally easier to get I/O boards working out of the box with Python, where high-level drivers and libraries are often already written. .NET of course also has ways to interface, but you'll likely end up doing it at a lower level due to the lack of drivers. There _are_ drivers, just not as many, so you're more likely to end up with generic GPIO pin reading/writing libraries.
Well, there are a lot of efforts to make C# / .NET suitable for Raspberry Pi projects.
NativeAOT is on the way, letting you compile small, self-contained binaries without any runtime dependencies.
If that does not work, you can publish your project as "ReadyToRun" with "Trimming" enabled (the `PublishReadyToRun` and `PublishTrimmed` MSBuild properties), which tree-shakes the unneeded runtime parts and produces an acceptably small binary.
One way to overcome the problem of missing drivers is to set up a Python API (e.g. Flask) and trigger things from the Avalonia UI. Or you could wrap a binary execution via CliWrap, an excellent .NET library for running external processes.
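As a rough sketch of both ideas (the Flask route, port, and script name below are made up; swap in whatever your Python side actually exposes):

```csharp
// Sketch only: assumes a hypothetical Flask service on port 5000 and a
// hypothetical read_sensor.py script; neither comes from any real project.
using System;
using System.Net.Http;
using CliWrap;
using CliWrap.Buffered;

using var http = new HttpClient();

// Option A: let a small Flask API own the hardware and just call its routes.
await http.PostAsync("http://localhost:5000/led/toggle", content: null);

// Option B: run an existing Python script via CliWrap and capture its stdout.
var result = await Cli.Wrap("python3")
    .WithArguments("read_sensor.py")
    .ExecuteBufferedAsync();

Console.WriteLine($"Sensor reading: {result.StandardOutput.Trim()}");
```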
I once wrote a little eventhub[1] to run on the Raspberry Pi. It is just an experiment, but it worked OK.
There is also .NET IoT[2], which targets exactly these platforms.

1: https://github.com/sandreas/eventhub/tree/main/eventhub
2: https://dotnet.microsoft.com/en-us/apps/iot
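To give a feel for it, a minimal blink sketch against its `System.Device.Gpio` package (the pin number is just an example):

```csharp
// Toggle a GPIO pin from C# via the System.Device.Gpio package.
using System.Device.Gpio;
using System.Threading;

using var controller = new GpioController();
controller.OpenPin(18, PinMode.Output); // GPIO 18 is a placeholder; use your wiring

for (var i = 0; i < 10; i++)
{
    controller.Write(18, PinValue.High);
    Thread.Sleep(500);
    controller.Write(18, PinValue.Low);
    Thread.Sleep(500);
}
```

Add the package with `dotnet add package System.Device.Gpio`; higher-level device drivers live in the companion Iot.Device.Bindings package.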
I've replaced an industrial Windows workstation with a Pi + Avalonia in a factory. Way more compact, and you don't have to care about Windows anymore. Is the Pi industrial grade? No, but we have a spare Pi ready and an SSD with a preinstalled OS, so you can fix everything in a matter of minutes.
I did have to rewrite the software, because the original was WinForms, but it was a pretty simple application.
I've been trying to set up an HDMI pass-through pipeline from an HDMI input to an HDMI output with an Orange Pi 5 Plus. I could talk for a long time (now) about the issues with vendor-supplied kernels and unsupported hardware. I was completely naive until I had the hardware in hand, having not done any embedded work.
Right now, the best idea I have is to run Weston, have a full-screen Qt application, and use DMA buffers so I can do some zero-copy processing. Rockchip has its own MPP and RGA libraries that are tied into the Mali GPU, and I don't understand the current state of driver/userspace support well enough to avoid leveraging these libraries.
Rockchip and the ARM ecosystem are such a mess.
If anyone has any pointers, experience, approaches, code, etc, I would love to see it.
Not sure what kind of processing you need to do on the video stream, but have you considered giving `ffmpeg` a try if you just need plain pass-through from video input to output? `ffmpeg` might be built with support for the Mali libraries you mention on the OS you are using. If you are able to run `weston`, `ffmpeg` should be able to output directly to the DRM card through SDL2 (assuming it was built with it).
If the HDMI-USB capture card that outputs `mjpeg` exposes a `/dev/video` node, then it might be as simple as running:
`SDL_VIDEODRIVER=kmsdrm ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 -f opengl "hdmi output"`
An alternative could be to get a Raspberry Pi 3 or even a 2, and a distro where `omxplayer` can still be installed. You can then use `omxplayer` to display your MJPEG stream on the output of your choice; just make sure the `kms/fkms` dtoverlay is not loaded, because `omxplayer` works directly with DispmanX/the GPU (which is also why it's not compatible with the Pi 4 and above, much less the 5). The GPU contains a hardware MJPEG decoder, so for the most part bytes are sent directly to it.
Hope some of this info can be of help.
Looks helpful! I assume ffmpeg needs to be built with SDL for this to work? I couldn't get it to work with my current minimal compile, and I don't think the board I'm working on has SDL, so I might need to install that and recompile.
In general, DRM/KMS can be quite confusing, as there seems to be little userland documentation available. I assume you get the DMA buffers from the HDMI input somehow? If so, you should be able to use drmModeAddFB2WithModifiers to create a DRM framebuffer from them. Then attach that to a DRM plane, place that on a CRTC, and schedule a page flip after modesetting a video mode.
The advantage would be that you can run directly, without starting into any kind of visual environment first. But it's a huge mess to get going: I wrote quite a bit of Pi4/5 code recently to get a zero-copy HEVC/H264 decoder working, and it was quite a challenge. Maybe code like https://github.com/dvdhrm/docs/tree/master/drm-howto can help?
The HDMI receive device on the Orange Pi 5 Plus is in a semi-functional state. Collabora is in the process of upstreaming code so the RK3588 will work with the mainline Linux kernel.
Until that happens, working driver code is in a very transitional state.
To get going and sidestep that problem, I've purchased HDMI-to-USB capture cards that use MacroSilicon chips. I have some thought of using a cheaper CPU in the future with a daughterboard based on this project, which uses MacroSilicon chips: https://github.com/YuzukiHD/YuzukiLOHCC-PRO, which made it potentially not a waste of time to dig into.
The MacroSilicon HDMI-to-USB capture cards output MJPEG, which Rockchip's MPP library has a decoder for.
So the thought is: (1) allocate a DMA buffer, (2) set that DMA buffer as the MJPEG decoder target, (3) get the decoded data to display (sounds like I may need to encode again?), plus a parallel processing pipeline.
I'll dig into the stuff you've sent over. Very helpful, thanks for the pointers!
I've thought about switching to Pi4/5 for this. Based on your experience, would you recommend that platform?
Not the same thing, but there is this project that does digital RGB to HDMI using a Pi: https://github.com/hoglet67/RGBtoHDMI. I believe they use custom firmware on the Pi and a CPLD, but you could probably eliminate that doing HDMI to HDMI.
They don't have an MJPEG decoder yet, which is a blocker for hardware acceleration, but I'm going to try to patch the library with the author and get it added. Thanks for pointing it out!
I might end up doing that. When I was first digging into it, the Qt documentation seemed confusing. But after sinking 10-20 hours into everything, it's starting to click a lot more.
This is a great library, but as far as I understand it is rather for bare-metal or low-resource embedded operating systems, not for Linux. The OP apparently runs Linux. Could he also use LVGL on Linux and write to the FB device?
Oh, yeah, you're right. It seemed like OP was trying to prove he could "bare bones" it because the article was about how to avoid everything but a framebuffer, so I thought I'd offer this up... LVGL is as bare-bones as it gets!
One problem with Raspberry Pi displays is that not all of them provide a vsync signal in SPI mode. That leads to high CPU usage (due to a very high frame rate) and is generally inefficient. Choose your display carefully.
The way I did it was to run Weston without any icons or anything, and use systemd to start my app and reopen it if it closes. Worked well enough.
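For anyone wanting to replicate that, a minimal sketch of such a unit; the names, paths, and environment values here are hypothetical and depend on how Weston is launched on your system:

```ini
# /etc/systemd/system/kiosk-app.service -- hypothetical name and paths
[Unit]
Description=Full-screen kiosk application
After=graphical.target

[Service]
User=pi
# Point this at your actual binary
ExecStart=/home/pi/kiosk/MyApp
# Relaunch the app whenever it exits, with a short back-off
Restart=always
RestartSec=2
# Wayland clients need to find the compositor's socket
Environment=XDG_RUNTIME_DIR=/run/user/1000
Environment=WAYLAND_DISPLAY=wayland-1

[Install]
WantedBy=graphical.target
```

Enable it with `systemctl enable --now kiosk-app` and systemd handles the restart loop for you.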
I have tested the MPP SDK a bit and the code is easy to work with, with examples for encode and decode, both sync and async.
Thanks for the pointer!
https://lvgl.io/
QML is nice, animations were much smoother than I expected.