If you want something mostly premade, go get an Autoslide. If you want to do it completely from scratch:
1. RFID/Bluetooth proximity is much easier to work with than a camera + RPi + AI. For the use case you're talking about, AI is not just overkill; it will actively make it harder to achieve your goal.
2. Locking is pretty easy depending on the motor mechanism: either a cheap relay'd magnetic lock, or simply a motor that can't be backdriven easily (a rough sketch of the proximity + relay approach follows below).
Motor-wise, you can either use the rack-and-pinion style that Autoslide does, or a simple linear motor if you don't want to deal with gear tracks.
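For illustration, here's a minimal sketch of the proximity + relay idea. It assumes a Pi-class board with the bleak BLE library and gpiozero; the MAC address, RSSI threshold, GPIO pins, and timings are all placeholders you'd tune for your door:

    # Minimal proximity-unlock sketch. Assumes a Raspberry Pi-class board,
    # the `bleak` BLE library, and `gpiozero` for the relay pins.
    # The MAC address, RSSI threshold, pin numbers, and timings are placeholders.
    import asyncio
    from bleak import BleakScanner
    from gpiozero import OutputDevice

    FOB_MAC = "AA:BB:CC:DD:EE:FF"   # BLE fob/beacon to watch for (placeholder)
    RSSI_THRESHOLD = -65            # "close enough" signal strength, tune on site
    UNLOCK_SECONDS = 10             # how long to hold the door unlocked

    maglock_relay = OutputDevice(17, initial_value=False)  # energized = lock released
    motor_enable = OutputDevice(27, initial_value=False)   # enabled = door motor runs

    unlock_event = asyncio.Event()

    def on_advertisement(device, adv_data):
        # Called by bleak for every BLE advertisement it hears.
        if device.address.upper() == FOB_MAC and adv_data.rssi > RSSI_THRESHOLD:
            unlock_event.set()

    async def main():
        scanner = BleakScanner(detection_callback=on_advertisement)
        await scanner.start()
        try:
            while True:
                await unlock_event.wait()
                unlock_event.clear()
                maglock_relay.on()   # release the magnetic lock
                motor_enable.on()    # let the door motor open the door
                await asyncio.sleep(UNLOCK_SECONDS)
                motor_enable.off()
                maglock_relay.off()  # re-engage the lock
        finally:
            await scanner.stop()

    asyncio.run(main())

One caveat: modern phones randomize their BLE MAC addresses, so a dedicated BLE beacon or an RFID fob at the door tends to be more reliable than scanning for a phone directly.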
Overall, I went the Autoslide route and had it all set up and working in an hour or two.
Not trying to imply that's an explicit goal (it's probably just a resource problem), but it's an observation.
The Zig users on this thread seem not to understand this, and all seem to think documentation is a thing you write later for users once everything settles down. Or that it's somehow otherwise "in the way" of feature and API development speed.
That is a very strange view.
If writing developer documentation is having a serious effect on your language feature velocity, you are doing something very wrong. Writing it should instead make things move faster, because, at a minimum, others understand what you are trying to do and how it is going to work, and can help. Also, it helps you think through it yourself, and whether what you are writing makes any sense.
Yes, there are people who can do all this without documentation, but there are 100x as many who can't, yet who will still make high-quality contributions that move you along faster if you enable them to help. Throwing out the ability to have these folks help you is, at a minimum, self-defeating.
I learned this the hard way, because I'm one of those folks who can just stare at random undocumented, messy code and know what it actually does, what the author was probably trying to do, etc., and it took years until I learned that most people are not like this.
It's absolutely a hard problem, and it isn't well solved.
But the reply I made was to: "This means Vector databases, Search Indexes or fancy 'AI Search Databases' would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale."
I.e., information retrieval.
Access control in information retrieval is very well studied.
Making search engines, etc., that effectively confirm user access to each possible record is feasible, common (they don't do it exactly this way, but the result is the same), and scalable.
Hell, we even know how to do private information retrieval with access control in scalable ways.
PIR = the server does not know what the query was, or what the result was, but still retrieves the result.
So we know how to make it so that not only does the server not know what a user queried or retrieved, but each querying user can still only access records they are allowed to.
The overhead of this, which is much harder than non-private information retrieval with access control, is only 2-3x in computation. See, e.g., https://dspace.mit.edu/handle/1721.1/151392 for one example of such a system. There are others.
So even if your 2ms retrieval latency were all CPU and zero I/O, it would only become 4-6ms due to this.
If you remove the PIR part, as I said, it's much easier, and the overhead is much, much less, since it doesn't involve tons of computationally expensive encryption primitives (though some schemes still involve a few).
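To make the non-PIR case concrete, here's a toy sketch (plain numpy, made-up fields) of tracking access rights along with the content in a vector store and filtering to the caller's principals at query time. Real systems push the same filter into the index rather than scanning every row, which is where the scalability comes from:

    # Toy sketch: per-record access control in a vector search.
    # Plain numpy; embeddings, doc ACLs, and group names are made up.
    import numpy as np

    doc_vectors = np.random.rand(5, 8)   # 5 docs, 8-dim embeddings (stand-in)
    doc_acls = [{"eng"}, {"eng", "sales"}, {"hr"}, {"sales"}, {"eng"}]

    def search(query_vec, user_groups, top_k=3):
        # 1) Mask of records the caller may see (ACL stored alongside the content).
        allowed = np.array([bool(acl & user_groups) for acl in doc_acls])
        # 2) Cosine-similarity scores; disallowed docs can never rank.
        scores = doc_vectors @ query_vec / (
            np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vec))
        scores[~allowed] = -np.inf
        # 3) Top-k among the allowed records only.
        order = np.argsort(-scores)[:top_k]
        return [int(i) for i in order if allowed[i]]

    print(search(np.random.rand(8), user_groups={"eng"}))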
As long as not ALL the data the agent has access to is checked against the rights of the current user placing the request, there WILL be ways to leak data. This means Vector databases, Search Indexes or fancy "AI Search Databases" would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale.
And as access rights are complex and can change at any given moment, that would still be prone to race conditions.
Citation needed.
Most enterprise (homegrown or not) search engine products have to do this, and have been able to do it effectively at scale, for decades at this point.
This is a very well known and well-solved problem, and the solutions are very directly applicable to the products you list.
It is, as they say, a simple matter of implementation - if they don't offer it, it's because they haven't had the engineering time and/or customer need to do it.
Not because it doesn't scale.
A basic implementation will return the top (let's say 1,000) documents and then do the more expensive access check on each of them. Most of the time, you've now eliminated all of your search results.
Your search must be access-aware to do a reasonable job of pre-filtering the content to documents the user has access to, at which point you can then apply post-filtering with the "100% sure" access check.
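A rough sketch of that pre-filter/post-filter split, with hypothetical field names and a stubbed-out authoritative check standing in for the real ACL source of truth:

    # Access-aware search: pre-filter on ACL terms stored in the index,
    # then an authoritative post-filter on the (now small) candidate set.
    # Index layout, field names, and the final check are placeholders.
    INDEX = [
        {"id": 1, "terms": {"quarterly", "revenue"}, "acl": {"finance"}},
        {"id": 2, "terms": {"quarterly", "roadmap"}, "acl": {"eng", "finance"}},
        {"id": 3, "terms": {"revenue", "forecast"},  "acl": {"exec"}},
    ]

    def authoritative_check(doc_id, user):
        # The expensive "100% sure" check against the real ACL system (stubbed);
        # it only runs on the handful of pre-filtered candidates.
        return True

    def search(query_terms, user, user_groups, top_k=1000):
        # Pre-filter: only score docs whose indexed ACL overlaps the user's groups,
        # so the top-k isn't wiped out by the access check afterwards.
        candidates = [d for d in INDEX
                      if d["acl"] & user_groups and d["terms"] & query_terms]
        ranked = sorted(candidates,
                        key=lambda d: len(d["terms"] & query_terms),
                        reverse=True)[:top_k]
        # Post-filter: authoritative check, which also catches stale index entries.
        return [d["id"] for d in ranked if authoritative_check(d["id"], user)]

    print(search({"quarterly"}, user="alice", user_groups={"finance"}))

The post-filter is also what handles the "rights can change at any moment" concern: the indexed ACL terms only need to be approximately fresh, since the final check is done against the source of truth.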
As a basic example: while your point about the starting percentages is correct, the study lost participants over time. Group A (the $1k/month group) lost 33% of its participants by T3, and Group C (the $50/month comparison group) lost 38% of its participants.
The table you quote from the final study doesn't include the people who were lost, only those who filled out both surveys (T1 and T3). So using it to say they helped a greater percentage of people is a bit weird.
They also don't give you the T2 table in the final report; you have to go look at the interim one.
The T2 data makes T1->T3 look much less impressive, and definitely doesn't seem to support some amazing gain for Group A.
As far as I can tell, the data looks even less impressive for your claim if you do T1->T2 and T2->T3, instead of just T1->T3 with only those present at both timepoints included.
It would certainly be easier to tease apart the point you're trying to make if they reported the number of originally unhoused vs. originally housed participants retained at each timepoint, but they don't.
So what am I missing? Why should I look at these results and think they're amazing?
(Also, I don't think I'd agree that the main argument the author makes is based on, and refuted solely by, the results of the Denver study.)
They do have better PR though.
I don't think you can argue in good faith that the vast majority of these have literally anything to do with antitrust.