Timestamp + random seems like a good tradeoff to reduce ID size while still getting reasonable characteristics; I'm surprised the article didn't explore that (but then again "timestamps" are a lot more nebulous at universal scale, I suppose). Just spitballing here, but I wonder if it would be worthwhile to reclaim ten bits of the Snowflake timestamp and use the low 32 bits for a random number. Four billion IDs for each second.
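Something like this, to be concrete (a hypothetical sketch of my spitballed layout, not anything from the article; `make_id` is just a name I picked):

```python
import secrets
import time

def make_id() -> int:
    """64-bit ID: a 32-bit Unix timestamp (seconds) in the high bits,
    32 random bits in the low bits.  Roughly four billion possible IDs
    per second, and IDs still sort by creation time."""
    ts = int(time.time()) & 0xFFFFFFFF  # seconds-resolution timestamp
    return (ts << 32) | secrets.randbits(32)
```

The 32-bit seconds field wraps in 2106, and you'd still want to check for collisions on insert, but the odds within any one second are small unless you're minting millions of IDs per second.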
There's a Tom Scott video [2] that describes YouTube video IDs as 11-character base-64 random strings, but I don't see any official documentation confirming that. At the end he says how many IDs are available, but I don't think he considers collisions via the birthday paradox.
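Back-of-envelope, taking the 11-character base-64 claim at face value and using the standard birthday-paradox approximation p ≈ 1 − exp(−n²/2N):

```python
import math

def collision_probability(n: int, space: int) -> float:
    """Approximate chance that at least one pair of n uniformly
    random IDs collides, given `space` possible IDs (birthday
    paradox approximation: 1 - exp(-n^2 / (2 * space)))."""
    return 1.0 - math.exp(-n * n / (2.0 * space))

N = 64 ** 11                          # ~7.4e19 possible 11-char base-64 IDs
p = collision_probability(10**9, N)   # say, a billion videos
```

Even at a billion videos the chance of any collision at all is well under 1%, so the space really is comfortably large; the collisions just aren't zero-probability the way the raw 64^11 count might suggest.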
This is not a dig at AI. If I take this article at face value, AI makes people more productive, assuming they have the taste and knowledge to steer their agents properly. And that's possibly a good thing even though it might have temporary negative side effects for the economy.
>But the AI is writing the traversal logic, the hashing layers, the watcher loops,
But unfortunately that's the stuff I like doing. I also like communing with the computer, and I don't want to delegate that to an agent (of course, like many engineers I've put more and more layers between me and the computer, going from assembly to C to Java to Scala, but this seems like a bigger leap).