Last night a friend from India popped up on Signal. I told him "Welcome!" and he said "You finally wore me down, I've left WhatsApp and I'm trying to move my family off of it..."
This is a genuinely controversial pattern for GUIs. In one camp, the GUI is really a skin over the CLI that acts like a virtual user, translating GUI inputs into the underlying CLI commands. In the other camp, the GUI is all there is (e.g., Windows) and there is no underlying CLI-accessible layer: in fact, the CLI is the "fake GUI" (Win32 apps written without a window). I can't say which is better, but it's fascinating to see that this was an "original pattern".
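The first camp's architecture can be sketched as a thin translation layer: the GUI collects form state and emits the argv the underlying CLI tool would have received from a human. This is a minimal sketch, assuming a hypothetical `copy` command with `--recursive` and `--force` flags (not any real tool's interface):

```python
import shlex

def gui_to_cli(command, flags, args):
    """Translate a GUI form submission (checkboxes plus text fields)
    into the argv the underlying CLI tool would receive."""
    argv = [command]
    for name, checked in flags.items():
        if checked:
            argv.append("--" + name)  # a ticked checkbox becomes a flag
    argv.extend(args)  # text fields become positional arguments
    return argv

# A "Copy" dialog with the "recursive" box ticked becomes:
print(shlex.join(gui_to_cli("copy", {"recursive": True, "force": False}, ["src", "dst"])))
# -> copy --recursive src dst
```

The payoff of this design is that anything the GUI can do is scriptable for free, since the GUI and the script drive the same command surface.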
BEST. MAGAZINE. EVER!!!
(Well, other than computer magazines... <g>)
What I've always wanted to read are transcripts of the early players (like Gygax), perhaps at GenCon, working out very difficult encounters. For example: how did the early DMs approach role-playing a supergenius demigod with high-level cleric/MU spells? That's got to be a very hard character to inhabit. I mean the Lolth module, Q1? Come on, there are so many high-level intelligent creatures in that endgame...
This is a link to a dataset; unless I'm missing something, it's not about anomaly detection. I looked into this area a few years ago and always try to keep my eye open for breakthroughs... care to share any other links?
Huh? If anything I would say ML is way more trial-and-error focused than imperative programming.
While I agree that everyone was shocked, myself included, when we saw how well SSD and YOLO worked, progress on the last-mile problem has stagnated. What I mean is: 7 years ago I wrote an image pipeline for a company using traditional CV methods. It was extremely challenging. When we saw SSDMobileNet do the same job 10x faster with a fraction of the code, our jaws dropped. Which is why the dev ship turned on a dime: there's something big in there.
The industry has stagnated for exactly the reasons brought up: we don't know how to squeeze out the last-mile problem because NNs are EFFING HARD and the research is very math-heavy: e.g., it cannot be hacked by a Zuck-type into a half-assed product overnight, it needs to be carefully researched for years. This makes programmers sad, because by nature we love to brute-force trial-and-error our code, and homey don't play that game with machine learning.
However, there are places where it isn't stagnating, like vibration and anomaly detection. This is a case where https://github.com/YumaKoizumi/ToyADMOS-dataset really shines, because it adds something that didn't exist before, and it doesn't have to be 100% perfect: anything is better than nothing.
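The "anything is better than nothing" bar is why this niche works: you learn what normal machine sounds look like and flag anything far from it. This is a minimal stdlib-only sketch of that idea using per-feature z-scores and a percentile threshold (the ToyADMOS baselines use deep autoencoders; the synthetic features here are purely illustrative):

```python
import random
import statistics

random.seed(0)
# Stand-in features (e.g., per-band log-energies) of known-good machine sounds.
normal = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(500)]

dims = len(normal[0])
mu = [statistics.mean(col) for col in zip(*normal)]
sigma = [statistics.stdev(col) + 1e-9 for col in zip(*normal)]

def anomaly_score(x):
    # Mean squared z-score across feature dimensions: 0 means "perfectly typical".
    return sum(((v - m) / s) ** 2 for v, m, s in zip(x, mu, sigma)) / dims

# Threshold at the 99th percentile of scores seen on normal data,
# i.e. accept ~1% false alarms in exchange for catching gross deviations.
scores = sorted(anomaly_score(x) for x in normal)
threshold = scores[int(0.99 * len(scores))]

def is_anomalous(x):
    return anomaly_score(x) > threshold
```

A sample sitting at the learned mean scores 0 and passes; one shifted several standard deviations in every band trips the detector. Crude, but on a factory floor even this beats having no monitoring at all.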
At Embedded World last year I saw tons of FPGA solutions for rejecting parts on assembly lines. Since every object appears in nearly canonical form (good lighting, centered, homogeneous presentation), NNs are kicking ass big time in that space.
It is important to remember Self-Driving Car Magic is just the consumer-facing hype machine. ML/NNs are working spectacularly well in some domains.