DID and handle resolution was the easiest part of ATProto; as the author says, a library can do the job easily. For Ruby it's DIDKit [2]. Where ATProto really threw me was the apparent non-ownership of record types. Bluesky uses "app.bsky.feed.post" for its records, as seen in the article; there seem to be a lot of these record types, but there doesn't seem to be a central index of them like there is for DIDs, or a standard way of documenting them... and as far as I've been able to find, there's no standard method of turning an at:// URI into an http:// URL.
When my app makes a post on behalf of a user, Bluesky only sends an at:// URI back, which I have to convert myself into an http:// URL on Bluesky's web app. I can only do that with string manipulation, and only because I know, externally, what format the URL should be in. There's no canonical resolution.
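To make the complaint concrete, here's roughly what that string manipulation looks like. The target URL shape is an assumption based on how Bluesky's web app happens to structure its links today (`https://bsky.app/profile/<did-or-handle>/post/<rkey>`), not anything the protocol guarantees:

```typescript
// Sketch of converting an at:// URI for an app.bsky.feed.post record into
// a bsky.app web URL. The bsky.app URL format is an observed convention,
// not a canonical resolution mechanism.
function atUriToBskyUrl(atUri: string): string {
  // at://<did-or-handle>/<collection>/<rkey>
  const match = atUri.match(/^at:\/\/([^/]+)\/([^/]+)\/([^/]+)$/);
  if (!match) throw new Error(`unrecognized at:// URI: ${atUri}`);
  const [, authority, collection, rkey] = match;
  if (collection !== "app.bsky.feed.post") {
    throw new Error(`no known web URL for collection ${collection}`);
  }
  return `https://bsky.app/profile/${authority}/post/${rkey}`;
}

console.log(atUriToBskyUrl("at://did:plc:abc123/app.bsky.feed.post/3k44deefy6k2a"));
// https://bsky.app/profile/did:plc:abc123/post/3k44deefy6k2a
```

Note that this only works for the one collection you've hard-coded knowledge of; any other record type would need its own externally-learned URL format, which is exactly the problem.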
[1]: https://toucanpost.com [2]: https://github.com/mackuba/didkit
All of my lightbulbs, occupancy sensors, etc. just connect directly to WiFi and run custom firmware that I wrote, so I know exactly what they're doing and how to control them. They make no attempt to access the wider Internet, but they're all on a VLAN without Internet access anyway.
It feels like introducing Zigbee to this would just be an extra hub device taking up space, acting as an extra point of failure, and making it more complicated to develop against my devices. As it stands now I can easily control devices manually by piping crap into netcat if I need to for some reason, since they're all just normal IP-networked devices. I think I would have to jump through extra hoops to do similar things with Zigbee.
Is the main aspect driving people to Zigbee just that off-the-shelf consumer smart devices that use WiFi tend to be annoying dogshit, and Zigbee keeps manufacturers in line better? I don't see any reliability or simplicity benefits to it, just the market poisoning WiFi and Zigbee being the only worthwhile alternative.
Nevertheless I am super impressed with their speed and excited about the result. I didn't expect this project to grow to this state so quickly; I thought it would take them much more time. For comparison, Deno was started way earlier and is now miles behind (personal feeling). I am considering using it for my pet projects.
You can implement basic operators like map, filter, take, etc. over generators to create pipelines of operations. It's a very neat abstraction to work with, but like Rx, it can quickly get hard to reason about.
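A minimal sketch of what such operators can look like (the names and shapes here are illustrative, not from any particular library). Because generators are lazy, the pipeline can even run over an infinite source, since nothing executes until a consumer pulls values:

```typescript
// Generator-based pipeline operators: each wraps an iterable and yields
// transformed values on demand.
function* map<T, U>(iter: Iterable<T>, fn: (x: T) => U): Generator<U> {
  for (const x of iter) yield fn(x);
}

function* filter<T>(iter: Iterable<T>, pred: (x: T) => boolean): Generator<T> {
  for (const x of iter) if (pred(x)) yield x;
}

function* take<T>(iter: Iterable<T>, n: number): Generator<T> {
  let taken = 0;
  for (const x of iter) {
    if (taken++ >= n) return; // stop pulling from the source entirely
    yield x;
  }
}

// An infinite source: safe because take() bounds consumption.
function* naturals(): Generator<number> {
  for (let i = 1; ; i++) yield i;
}

const firstThreeEvenSquares = [
  ...take(map(filter(naturals(), (n) => n % 2 === 0), (n) => n * n), 3),
];
console.log(firstThreeEvenSquares); // [4, 16, 36]
```

The nesting is also where the "hard to reason about" part shows up: reading order is inside-out, and each added operator buries the source one level deeper.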
Recently I wrote some tooling to read, process, and write hundreds of thousands of files locally. Using generators solved having to think about loading too much into memory, since files are only yielded as they're consumed. It also lets you simply implement things like batching: running X requests to a server at a time, and only starting the next batch once the previous one is done.
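The batching idea can be sketched as a generator that yields fixed-size chunks of any iterable; the consumer then awaits one chunk's worth of work before pulling the next, so backpressure falls out for free. This is a hedged illustration, not the author's actual tooling:

```typescript
// Yield fixed-size chunks of an iterable; the final chunk may be smaller.
function* batch<T>(iter: Iterable<T>, size: number): Generator<T[]> {
  let current: T[] = [];
  for (const item of iter) {
    current.push(item);
    if (current.length === size) {
      yield current;
      current = [];
    }
  }
  if (current.length > 0) yield current;
}

// Consumer: run one batch of (hypothetical) requests at a time. The next
// batch -- and thus the next reads from a lazy file source -- only happens
// after the current Promise.all settles.
async function processAll(files: Iterable<string>, perBatch: number) {
  for (const group of batch(files, perBatch)) {
    await Promise.all(group.map((f) => Promise.resolve(f.length))); // stand-in for a real request
  }
}

const chunks = [...batch([1, 2, 3, 4, 5, 6, 7], 3)];
console.log(chunks); // [[1, 2, 3], [4, 5, 6], [7]]
```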
Glossing over this fact leads to a flawed understanding, not a deeper one.
Agreed, and this points to two deeper issues:

1. Fine-grained data access (e.g., sandboxed code can only issue SQL queries scoped to particular tenants)

2. Policy enforced on data (e.g., sandboxed code shouldn't be able to send PII, even to APIs it has access to)
Object-capabilities can help directly with both #1 and #2.
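As a toy illustration of the object-capability angle on #1 (all names here are hypothetical, not any real API): instead of handing sandboxed code a raw database handle plus a policy check, you hand it an object whose methods are already scoped to one tenant, so there is no ambient authority to widen:

```typescript
interface Row { tenantId: string; value: string }

// The closure holds the full dataset; the returned capability object only
// exposes queries pre-scoped to a single tenant.
function makeTenantDb(allRows: Row[], tenantId: string) {
  return {
    query(pred: (r: Row) => boolean): Row[] {
      return allRows.filter((r) => r.tenantId === tenantId && pred(r));
    },
  };
}

const rows: Row[] = [
  { tenantId: "a", value: "alpha" },
  { tenantId: "b", value: "beta" },
];
const dbForA = makeTenantDb(rows, "a");
console.log(dbForA.query(() => true).map((r) => r.value)); // ["alpha"]
```

Issue #2 is harder, since it's a property of where data flows after it's read, but the same pattern applies: give the sandbox capabilities to sinks that filter or refuse PII, rather than raw network access.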
I've been working on this problem -- happy to discuss if anyone is interested in the approach.