Readit News
DannyBee commented on How can AI ID a cat?   quantamagazine.org/how-ca... · Posted by u/sonabinu
sillysaurusx · 21 hours ago
How do you interface with your cat’s chip? Mine is chipped but it never occurred to me to build a detector.
DannyBee · 12 hours ago
125 kHz reader. The real problem is distance most of the time. Cats are curious enough about the doors that they will go right next to them. Most dogs won't.
DannyBee commented on How can AI ID a cat?   quantamagazine.org/how-ca... · Posted by u/sonabinu
reilly3000 · a day ago
Long have I wanted a cat door that would only open for my cats, not the mean neighborhood one that eats their food. I can’t be the only one. I’ve been meaning to try to build one with a camera, rPi and Google Coral, but never got around to it. There’s the matter of the locking mechanism and more.
DannyBee · 12 hours ago
I have built two of these for dogs. It's really not hard, whether you go completely from scratch or use something premade.

If you want something mostly premade, go get an Autoslide. If you want to do it completely from scratch:

1. RFID/Bluetooth proximity is much easier to work with than a camera + rPi + AI. For the use case you are talking about, AI is not just overkill, but will make it actively harder to achieve your goal

2. Locking is pretty easy depending on the motor mechanism - either a cheap relayed magnetic lock, or simply a motor that can't be backdriven easily.

Motor-wise, you can either use the rack-and-pinion style that Autoslide does, or a simple linear motor if you don't want to deal with gear tracks. A rough sketch of the electronics side is below.
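To give a sense of how little code the RFID route needs, here is a minimal sketch for a Raspberry Pi. It assumes a 125 kHz serial reader module that emits one tag ID per line on /dev/ttyAMA0 and a relayed magnetic lock on GPIO 17; the pin, port, and tag IDs are all illustrative, not from my actual build.

    # Sketch: unlock a relayed maglock when a known 125 kHz tag is read.
    # Assumes a UART reader that emits one tag ID per line (hypothetical
    # wiring: reader on /dev/ttyAMA0, lock relay on BCM pin 17).
    import time
    import serial          # pip install pyserial
    import RPi.GPIO as GPIO

    ALLOWED_TAGS = {"0008F2A1B3"}   # your pets' tag IDs (made up here)
    RELAY_PIN = 17
    UNLOCK_SECONDS = 5

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

    with serial.Serial("/dev/ttyAMA0", 9600, timeout=1) as reader:
        while True:
            tag = reader.readline().strip().decode("ascii", errors="ignore")
            if tag in ALLOWED_TAGS:
                GPIO.output(RELAY_PIN, GPIO.HIGH)   # energize relay, release lock
                time.sleep(UNLOCK_SECONDS)
                GPIO.output(RELAY_PIN, GPIO.LOW)    # re-lock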

Overall, I went the Autoslide route and had it all set up and working in an hour or two.

DannyBee commented on I'm too dumb for Zig's new IO interface   openmymind.net/Im-Too-Dum... · Posted by u/begoon
hardwaresofton · 2 days ago
Yeah, thinking about this attitude positively, maybe it's a feature: if only hardcore people can comfortably figure it out, you get higher-quality contributions?

Not trying to imply that’s an explicit goal (probably instead just a resource problem), but an observation

DannyBee · 2 days ago
More likely you get people who have the same groupthink, which has both ups and downs. Their contribution quality is probably not that well correlated.
DannyBee commented on I'm too dumb for Zig's new IO interface   openmymind.net/Im-Too-Dum... · Posted by u/begoon
0x696C6961 · 2 days ago
Writing good docs/examples takes a lot of effort. It would be a waste considering the amount of churn that happens in Zig at this point.
DannyBee · 2 days ago
Only in a world where features are made without thought. Documentation is not just for your users. Writing the development documentation helps you make better features in the first place.

The Zig users in this thread seem not to understand this, and all seem to think documentation is a thing you write later for users once everything settles down, or that it is somehow otherwise "in the way" of feature and API development speed.

That is a very strange view.

If writing developer documentation is having a serious effect on your language's feature velocity, you are doing something very wrong. Instead, writing it should make things move faster, because, at a minimum, others understand what you are trying to do and how it is going to work, and can help. It also helps you think through the design yourself and whether what you are writing makes any sense.

Yes, there are people who can do this all without documentation, but there are 100x as many who can't, yet will still give high-quality contributions that move you along faster if you enable them to help you. Throwing out the ability to have these folks help you is, at a minimum, self-defeating.

I learned this the hard way, because I'm one of those folks who can just stare at random undocumented messy code and know what it actually does, what the author was probably trying to do, etc., and it took years until I learned most people are not like this.

DannyBee commented on Copilot broke audit logs, but Microsoft won't tell customers   pistachioapp.com/blog/cop... · Posted by u/Sayrus
malfist · 5 days ago
If you're stringing together a bunch of MCPs, you probably also have to string together a bunch of authorization mechanisms. Try having your search engine confirm, live, each person's access to each possible row.

It's absolutely a hard problem and it isn't well solved

DannyBee · 5 days ago
Yes, if you try to string together 30 systems with no controls and implement controls at the end, it can be hard and slow - "this method I designed not to work doesn't work" is not very surprising.

But the reply I made was to "This means Vector databases, Search Indexes or fancy "AI Search Databases" would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale."

I.e., information retrieval.

Access control in information retrieval is very well studied.

Making search engines, etc., that effectively confirm user access to each possible record is feasible, common (they don't do it exactly this way, but the result is the same), and scalable. A toy sketch of the idea is below.
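To make that concrete, here is a toy sketch of the standard approach: index each document with the set of principals allowed to read it, and intersect that set with the caller's principals at match time. Real engines push the ACL into the index as a filter clause; the class and field names here are purely illustrative.

    # Toy ACL-aware retrieval: each doc carries its allowed principals,
    # and search() only returns docs the caller can read.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        doc_id: str
        text: str
        allowed: set       # principals (users/groups) with read access

    class AclIndex:
        def __init__(self):
            self.postings = {}   # term -> set of doc_ids
            self.docs = {}       # doc_id -> Doc

        def add(self, doc):
            self.docs[doc.doc_id] = doc
            for term in doc.text.lower().split():
                self.postings.setdefault(term, set()).add(doc.doc_id)

        def search(self, query, principals):
            hits = None
            for term in query.lower().split():
                ids = self.postings.get(term, set())
                hits = ids if hits is None else hits & ids
            # The access check is a set intersection per candidate doc.
            return [self.docs[d] for d in (hits or set())
                    if self.docs[d].allowed & principals]

    idx = AclIndex()
    idx.add(Doc("d1", "quarterly revenue report", {"finance", "execs"}))
    idx.add(Doc("d2", "quarterly engineering roadmap", {"eng"}))
    print([d.doc_id for d in idx.search("quarterly", {"eng"})])  # ['d2']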

Hell, we even know how to do private information retrieval with access control in scalable ways.

PIR = the server does not know what the query or the result was, but still retrieves the result.

So we know how to make it so that not only does the server not know what was queried or retrieved by a user, but each querying user can still only access records they are allowed to.

Overhead of this, which is much harder than non-private information retrieval with access control, is only 2-3x in computation. See, e.g., https://dspace.mit.edu/handle/1721.1/151392 for one example of such a system. There are others.

So even if your 2ms retrieval latency were all CPU and 0 I/O, it would only become 4-6ms due to this.

If you remove the PIR part, as I said, it's much easier, and the overhead is much, much less, since it doesn't involve tons and tons of computationally expensive encryption primitives (though some schemes still involve some).

DannyBee commented on Copilot broke audit logs, but Microsoft won't tell customers   pistachioapp.com/blog/cop... · Posted by u/Sayrus
planb · 5 days ago
I am assigned to develop a company-internal chatbot that accesses confidential documents, and I am having a really hard time communicating this problem to executives:

As long as not ALL the data the agent has access to is checked against the rights of the current user placing the request, there WILL be ways to leak data. This means Vector databases, Search Indexes or fancy "AI Search Databases" would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale.

And as access rights are complex and can change at any given moment, that would still be prone to race conditions.

DannyBee · 5 days ago
"would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale"

Citation needed.

Most enterprise search-engine products (homegrown or not) have to do this, and have been able to do it effectively at scale for decades at this point.

This is a very well known and well-solved problem, and the solutions are very directly applicable to the products you list.

It is, as they say, a simple matter of implementation - if they don't offer it, it's because they haven't had the engineering time and/or customer need to do it.

Not because it doesn't scale.

DannyBee commented on Copilot broke audit logs, but Microsoft won't tell customers   pistachioapp.com/blog/cop... · Posted by u/Sayrus
doomslice · 5 days ago
Let's say you have 100,000 documents in your index that match your query, but the user has access to only 10 of them:

A basic implementation will return the top (let's say 1,000) documents and then do the more expensive access check on each of them. Most of the time, you've now eliminated all of your search results.

Your search must be access-aware to do a reasonable job of pre-filtering the content to documents the user has access to, at which point you can then apply post-filtering with the "100% sure" access check.

DannyBee · 5 days ago
Yes. But this is still an incredibly well-known and solved problem. As an example, Google's internal structured search engines did this decades ago at scale.
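A tiny simulation shows why the ordering matters; the numbers mirror the hypothetical above and everything here is illustrative:

    # Demo: post-filtering the top-k loses accessible docs; filtering on
    # access during matching keeps them. 100,000 matches, 10 readable.
    import random

    random.seed(0)
    docs = [{"id": i, "score": random.random(), "readable": i < 10}
            for i in range(100_000)]

    def post_filter(docs, k=1000):
        top_k = sorted(docs, key=lambda d: -d["score"])[:k]
        return [d for d in top_k if d["readable"]]

    def pre_filter(docs, k=1000):
        visible = (d for d in docs if d["readable"])
        return sorted(visible, key=lambda d: -d["score"])[:k]

    print(len(post_filter(docs)))  # usually 0: top 1000 by score alone
    print(len(pre_filter(docs)))   # 10: every doc the user can read
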
DannyBee commented on Giving people money helped less than I thought it would   theargumentmag.com/p/givi... · Posted by u/tekla
vannevar · 5 days ago
Contrary to the author's assertion, the Denver Basic Income study, which gave $1000/mo, found a significant improvement in housing for the test group vs the controls. She misread the results, failing to note the initial housing rates for control vs test.

https://www.denverbasicincomeproject.org/research

DannyBee · 5 days ago
I read your other comment with the numbers, and I don't think it makes the amazing difference you seem to think it does. Certainly not to a degree that makes it all worth it. Maybe if they at least plateaued in different places, but they don't. You also seem fairly defensive (you've posted the same response repeatedly) about what still seem like middling results.

As a basic example: while your point about the starting percentages is correct, the study lost participants over time. Group A (the $1k/month group) lost 33% of its participants by T3, and Group C (the $50/month comparison group) lost 38% of its participants.

The table you quote from the final study doesn't include the people who were lost, only those who filled out both surveys, T1 and T3. So using it to say they helped a greater percentage of people is a bit weird.
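To make the survivorship problem concrete, here is a toy calculation. The numbers are made up for illustration (only the attrition rates echo the study); the point is just that "percent housed among retained" says nothing about the dropouts, and the two groups lost different fractions:

    # Made-up figures, NOT the study's: the denominator problem.
    start = 100                        # enrolled per group at T1
    retained = {"A": 67, "C": 62}      # after 33% / 38% attrition
    housed_at_t3 = {"A": 40, "C": 37}  # among both-survey respondents

    for g in ("A", "C"):
        of_retained = 100 * housed_at_t3[g] / retained[g]
        of_enrolled = 100 * housed_at_t3[g] / start  # floor: assumes all
                                                     # dropouts unhoused
        print(f"Group {g}: {of_retained:.0f}% of retained, "
              f"{of_enrolled:.0f}% floor over all enrolled")

With these made-up numbers, both groups look identical among the retained, even though the underlying truth could differ a lot depending on what happened to the dropouts.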

They also don't give you the T2 table in the final report; you have to go look at the interim one.

The T2 data makes T1->T3 look much less impressive, and definitely doesn't seem to support some amazing gain for Group A.

As far as I can tell, the data looks even less impressive for your claim if you do T1->T2 and T2->T3, instead of just T1->T3 with only both surveys included.

It would certainly be easier to tease apart the point you are trying to make if they reported the number of originally unhoused vs. originally housed participants retained at each timepoint, but they don't.

So what am I missing? Why should I look at these results and think they are amazing?

(Also, I don't think I'd agree that the author's main argument rests on, and is refuted solely by, the results of the Denver study.)

DannyBee commented on Electricity prices are climbing more than twice as fast as inflation   npr.org/2025/08/16/nx-s1-... · Posted by u/geox
danielmarkbruce · 7 days ago
Because public operation of infrastructure has often not gone well. And no matter who owns it, there is a cost of capital.
DannyBee · 7 days ago
It's gone fine. It's always fun to hear about the wonders of privatization from people who conveniently ignore that the vast majority of private businesses fail miserably: over 50 percent within five years, mostly due to mismanagement of money, the thing they are supposed to be better at. The rate is even higher (80-90 percent) if you look at small businesses.

They do have better PR though.

DannyBee commented on Judge Blocks FTC Investigation of Media Matters   nytimes.com/2025/08/15/te... · Posted by u/duxup
SilverElfin · 9 days ago
How can a judge block what amounts to an antitrust investigation into Media Matters and the advertising companies? Calling it a First Amendment violation seems like a stretch. When apartment owners signal pricing to each other, is that also off limits because of the First Amendment?
DannyBee · 9 days ago
The judge blocked a very broad CID (civil investigative demand). See page 10 for a list of just some of the documents requested.

I don't think you can argue in good faith that the vast majority of these have literally anything to do with antitrust.

u/DannyBee

Karma: 30303
Cake day: June 21, 2011
About
Xoogler just enjoying life for a while after a long time in tech. I'm also an open source lawyer.

If there is anything I can help you with, feel free to poke me.
