Um.” Manfred finds it, floating three tiers down an elaborate object hierarchy. It’s flashing for attention. There’s a priority interrupt, an incoming lawsuit that hasn’t propagated up the inheritance tree yet. He prods at the object with a property browser. “I’m afraid I’m not a director of that company, Mr. Glashwiecz. I appear to be retained by it as a technical contractor with nonexecutive power, reporting to the president, but frankly, this is the first time I’ve ever heard of the company. However, I can tell you who’s in charge if you want.” “Yes?” The attorney sounds almost interested. Manfred figures it out; the guy’s in New Jersey. It must be about three in the morning over there. Malice—revenge for waking him up—sharpens Manfred’s voice. “The president of http://agalmic.holdings .root.184.97.AB5 is http://agalmic.holdings .root.184.97.201. The secretary is http://agalmic.holdings .root.184.D5, and the chair is http://agalmic.holdings .root.184.E8.FF. All the shares are owned by those companies in equal measure, and I can tell you that their regulations are written in Python. Have a nice day, now!”
This article reminds me of another book [1], Holacracy, in which how a business is run is systematized according to pre-defined principles. David Allen, a productivity trainer, used it at his own company for several years before eventually moving away from it because the ongoing overhead of keeping the system running was too high.
I wonder if this system will end up the same way. I love the idea, but humans operate at a squishier level than our computers do. There's a risk of 'massive bureaucratic dehumanization and inflexible processes', and the Iron Law of Organizations makes efforts like that book and this article fraught with peril. Taylorism has its limits.
But hey, if this works, I'll be excited to see more businesses adopt better practices instead of painfully fumbling through them in an organic, unplanned way.
[1] https://www.holacracy.org/blog/dac-ceo-reflects-on-holacracy...
https://huggingface.co/nvidia/nemotron-speech-streaming-en-0...
https://github.com/m1el/nemotron-asr.cpp
https://huggingface.co/m1el/nemotron-speech-streaming-0.6B-g...
I used to use Dragon Dictation to draft my first novel; I had to learn a 'language' to tell the rudimentary engine how to recognize my speech.
And then I discovered [1] and have been using it for some basic speech recognition, amazed at what a local model can do.
But it can't transcribe any text until I finish recording a file; only then does it start working, so the feedback loop is very slow, batch-style latency rather than real time.
And now you've posted this cool solution which streams audio chunks to a model in tiny pieces, amazing, just amazing.
Now if only I can figure out how to contribute streaming Speech To Text to Handy or a similar tool, local STT will be a solved problem for me.
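The difference between the two modes is really just where the transcription loop sits. Here's a toy sketch of the streaming pattern: feed small fixed-size chunks to a recognizer as they arrive instead of waiting for the whole recording. `StreamingRecognizer` is a made-up stand-in, not any real ASR API; a real streaming model would carry decoder state between chunks and emit partial hypotheses.

```python
# Toy sketch of chunked streaming recognition. StreamingRecognizer is
# hypothetical: it just records chunk sizes, where a real model would
# update decoder state and return a low-latency partial transcript.

class StreamingRecognizer:
    def __init__(self):
        self.partial = []

    def feed(self, chunk):
        # A real model updates its internal state here and returns
        # a partial hypothesis after every chunk, not at end-of-file.
        self.partial.append(f"<{len(chunk)} samples>")
        return " ".join(self.partial)

def chunks(samples, size):
    """Yield fixed-size slices of the audio buffer as they 'arrive'."""
    for i in range(0, len(samples), size):
        yield samples[i:i + size]

audio = [0] * 1000          # fake PCM samples standing in for a mic stream
rec = StreamingRecognizer()
for c in chunks(audio, 400):
    print(rec.feed(c))      # a (growing) partial transcript per chunk
```

The batch tools do the same work, but the `feed` loop only runs once, over the whole file, which is exactly the latency problem described above.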
Oh well, still an interesting article that shows statistics can be posed in such a way as to say anything about anything if you squint at them.
One thing I would add is that a given incident can have differing internal vs. external impact, which can drive up its cumulative impact.
For something to be an actor, it should be able to:
- Send and receive messages
- Create other actors
- Change how the next message is handled (`become` in classic actor terminology; in Erlang, tail-recursing into the receive loop with new state)
I think the last one is what differentiates it from simple message passing, and what makes it genius: state machines consuming queues.
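The three properties above fit in a few lines. Here's a minimal sketch in Python (my own illustration, not from any actor library): each actor is a mailbox drained by one thread, its handler can be swapped out ('become'), and since a handler is plain code it can construct new `Actor`s too.

```python
import queue
import threading

class Actor:
    """Minimal actor: a mailbox consumed by one thread, with a
    swappable handler. A handler may also create new Actor()s,
    covering all three properties from the list above."""
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            new = self.handler(self, msg)
            if new is not None:
                self.handler = new  # 'become': next message handled differently
            self.mailbox.task_done()

log = []

def locked(actor, msg):
    if msg == "unlock":
        return unlocked             # become the unlocked state machine
    log.append(("dropped", msg))

def unlocked(actor, msg):
    log.append(("handled", msg))

a = Actor(locked)
for m in ["hi", "unlock", "hi"]:
    a.send(m)
a.mailbox.join()                    # wait until the mailbox is drained
print(log)                          # [('dropped', 'hi'), ('handled', 'hi')]
```

The handler-swap is exactly the "state machine consuming a queue": each state is a function, and transitions happen by returning the next state.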
https://www.theargumentmag.com/p/no-country-for-young-famili...
Meanwhile, a really important dynamic to keep in mind is that in most inner-ring suburbs in the US, the primary driver of home values (and of property taxes) is the school system. If you don't actively enact policies that work against this dynamic, you get trapped in a spiral of increasing prices, in part because parents can bid up prices and suffer them only for the span of time their kids are in school --- "renting the schools".
This is the primary failure of all of the AI creative tooling: not even necessarily that it does too much, but that the artist's effort doesn't correlate with good output. Sometimes you get something usable in 1 or 2 prompts, and it almost feels like magic/cheating. Other times you spend hours revising prompts trying to get it to do something, and are never successful.
Any other toolset I can become familiar with and better equipped to use. AI-based tools are uniquely unpredictable, so I haven't really found any place beyond base concepting work where I'm comfortable making them a permanent component.
And more generally, to your nod that some day artists will use AI: I mean, it's not impossible. That being said, as an artist, I'm not comfortable chaining my output to anything as liquid and ever-changing and unreliable as anything currently out there. I don't want to put myself in a situation where my ability to create hinges on paying a digital landlord for access to a product that can change at any time. I got out of Adobe for the same reason: I was sick of having my workflows frustrated by arbitrary changes to the tooling I didn't ask for, while actual issues went unsolved for years.
Edit: I would also add the caveat that the more work the tool does, the less room the artist has to actually be creative. That's my main beef with AI imagery: it literally all looks the same. I can clock AI stuff incredibly well because it shares the same characteristics: things being too shiny is, weirdly, the biggest giveaway. I'm not sure why AIs think everything is wet at all times, but it's very consistent. It also over-populates scenes; more shit in the frame isn't necessarily a good thing that contributes to a work, and AI has no concept at all of negative space. And if a human artist has no space to be creative in the tool... well, they're going to struggle pretty hard to have any kind of recognizable style.
It has a full image-generation mode, an animation mode, and a live mode where you can draw a rough blob of shapes and it will refine just that area, using anywhere from 2 to 50 steps.
So you are no longer working stroke by stroke with saved brush settings, but you are still painting and composing the image yourself, down to the pixel. It's just that the tool is WAY more compute intensive: the AI re-renders a given part of the drawing, as you specify, as many times as you need.
How much of that workflow is just prompting a one-shot image, vs. photoshopping an image together until it meets your exact specifications?
No, the final image cannot be copyrighted under current US law in 2026, but for use in private settings like tabletop RPGs... my production values have gone way up, and I didn't need an MFA in Old Masters drawing or to open a drawing studio to get those images.