I used to work on a kernel debugging tool and had a particularly annoying security researcher bug me about a signed/unsigned integer check that could make the target kernel panic when fed a malformed debug packet. As if you couldn't do the same by just writing random stuff to random addresses, since you are literally debugging the kernel with full memory access. Sad.
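For what it's worth, the bug class in question is the usual one sketched below. This is a made-up illustration, not the actual tool's code: a length field handled as a signed int slips past the upper-bound check when negative, then wraps to an enormous unsigned size.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    constexpr std::size_t kMaxPayload = 1024;

    // Broken: a negative declared_len passes the check, then converts to a huge size_t.
    void handle_debug_packet(const uint8_t* payload, int32_t declared_len) {
        uint8_t buf[kMaxPayload];
        if (declared_len > static_cast<int32_t>(kMaxPayload)) return;
        std::memcpy(buf, payload, static_cast<std::size_t>(declared_len));  // boom
    }

    // Fixed: treat the length as unsigned (or reject negatives explicitly).
    void handle_debug_packet_fixed(const uint8_t* payload, uint32_t declared_len) {
        uint8_t buf[kMaxPayload];
        if (declared_len > kMaxPayload) return;
        std::memcpy(buf, payload, declared_len);
    }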
What I do is add the following notice to my GitHub issue template: "X is a passion project and issues are triaged based on my personal availability. If you need immediate or ongoing support, please purchase a support contract through my software company: [link to company webpage]".
Metal Shading Language, for example, uses a subset of C++, and HLSL and GLSL are C-like languages.
In my view, it is nice to have an equivalent syntax and language for both CPU and GPU code, even though you still want to keep GPU compute kernels and shaders simple.
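As a rough illustration of that syntax parity (the file name and helper below are hypothetical, not from any SDK sample): because MSL is a C++ subset, a trivial helper like this can live in a shared header and be compiled both by the host C++ compiler and by the Metal shader compiler.

    // color_common.h -- included from host C++ code and from a .metal shader.
    // Plain scalar arithmetic like this is valid in both dialects.
    inline float luminance(float r, float g, float b) {
        // Rec. 709 luma weights.
        return 0.2126f * r + 0.7152f * g + 0.0722f * b;
    }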
Since then, Microsoft has had no real answer to "how do I write desktop applications for Windows?" other than "use Electron".
(If they were still introducing new widget sets, they'd be converting the 'modern' dialogs to something 'postmodern' while still keeping the Win '95 dialogs in there.)
Microsoft has been pushing WinUI for the past few years, with WinUI 3 being the latest recommended UI toolkit [1]. I think what's interesting about WinUI 3 is that it's not built into Windows - you have to ship the whole toolkit with your app, just as you would GTK or Qt on Windows. I find that a perplexing direction for a "native" toolkit.
[1] https://learn.microsoft.com/en-us/windows/apps/winui/winui3/
It was pretty clear, even 20 years ago, that OOP had major problems in terms of what Casey Muratori now calls "hierarchical encapsulation" of problems.
One thing that really jumped out at me was his quote [0]:
> I think when you're designing new things, you should focus on the hardest stuff. ... we can always then take that and scale it down ... but it's almost impossible to take something that solves simple problems and scale it up into something that solves hard [problems]
I understand the context, but this, in general, is abysmally bad advice. I can't speak to language design or system architecture, but it is almost universally not true for any mathematical or algorithmic pursuit.
[0] https://www.youtube.com/watch?v=wo84LFzx5nI&t=8284s
Unfortunately, the "history" omits prototype-based OO (Self, Io, Lua, etc.), which doesn't suffer from many of the "issues" cited by the speaker.
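To make that concrete, here is a rough sketch of the prototype model (written in C++ only to match the other examples in this thread; in Self/Io/Lua the delegation is built into the language): objects carry their own slots and forward missing lookups to a prototype, so behaviour is shared and can be reshaped per object without committing to a class hierarchy up front.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    struct Object {
        Object* proto = nullptr;  // delegation target (the "parent" object)
        std::map<std::string, std::function<std::string(Object&)>> slots;

        // Look up a slot locally, then walk the prototype chain.
        std::string send(const std::string& name) {
            for (Object* o = this; o != nullptr; o = o->proto) {
                auto it = o->slots.find(name);
                if (it != o->slots.end()) return it->second(*this);
            }
            return "<does not understand: " + name + ">";
        }
    };

    int main() {
        Object animal;
        animal.slots["describe"] = [](Object&) { return std::string("an animal"); };

        Object dog;
        dog.proto = &animal;  // "inherit" by delegating to an existing object
        dog.slots["bark"] = [](Object&) { return std::string("woof"); };

        std::cout << dog.send("describe") << "\n";  // found on the prototype
        std::cout << dog.send("bark") << "\n";      // found locally

        // Objects can be reshaped at runtime; there is no class tree to redesign.
        dog.slots["describe"] = [](Object&) { return std::string("a dog"); };
        std::cout << dog.send("describe") << "\n";
        return 0;
    }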