Readit News
DannyBee commented on I'm too dumb for Zig's new IO interface   openmymind.net/Im-Too-Dum... · Posted by u/begoon
hardwaresofton · a day ago
Yeah, thinking about this attitude positively, maybe it's a feature — if only hardcore people can comfortably figure it out, you get higher quality contributions?

Not trying to imply that’s an explicit goal (probably instead just a resource problem), but an observation

DannyBee · a day ago
More likely you get people who have the same groupthink, which has both ups and downs. Their contribution quality is probably not that well correlated.
DannyBee commented on I'm too dumb for Zig's new IO interface   openmymind.net/Im-Too-Dum... · Posted by u/begoon
0x696C6961 · a day ago
Writing good docs/examples takes a lot of effort. It would be a waste considering the amount of churn that happens in Zig at this point.
DannyBee · a day ago
Only in a world where features are made without thought. Documentation is not just for your users. Writing the development documentation helps you make better features in the first place.

The Zig users on this thread seem not to understand this, and all seem to think documentation is a thing you write later for users when everything settles down, or that it is somehow otherwise "in the way" of feature and API development speed.

That is a very strange view.

If writing developer documentation is having a serious effect on your language feature velocity, you are doing something very wrong. Instead, writing it should make things move faster, because, at a minimum, others understand what you are trying to do and how it is going to work, and can help. It also helps you think through it yourself and whether what you are writing makes any sense.

Yes, there are people who can do all this without documentation, but there are 100x as many who can't, yet will still give high-quality contributions that will move you along faster if you enable them to help you. Throwing out the ability to have these folks help you is, at a minimum, self-defeating.

I learned this the hard way, because I'm one of those folks who can just stare at random, undocumented, messy code and know what it actually does, what the author was probably trying to do, etc., and it took years until I learned most people are not like this.

DannyBee commented on Copilot broke audit logs, but Microsoft won't tell customers   pistachioapp.com/blog/cop... · Posted by u/Sayrus
malfist · 4 days ago
If you're stringing together a bunch of MCPs you probably also have to string together a bunch of authorization mechanisms. Try having your search engine confirm live each person's access to each possible row.

It's absolutely a hard problem and it isn't well solved

DannyBee · 4 days ago
Yes, if you try to string together 30 systems with no controls and implement controls at the end, it can be hard and slow - "this method I designed not to work doesn't work" is not very surprising.

But the reply I made was to "This means Vector databases, Search Indexes or fancy "AI Search Databases" would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale."

I.e., information retrieval.

Access control in information retrieval is very well studied.

Making search engines, etc., that effectively confirm user access to each possible record is feasible, common, and scalable (they don't do it exactly this way, but the result is the same).

Hell, we even know how to do private information retrieval with access control in scalable ways.

PIR = the server does not know what the query was or what the result was, but still retrieves the result.

So we know how to make it so that not only does the server not know what a user queried or retrieved, but each querying user can still access only the records they are allowed to.

The overhead of this, which is much harder than non-private information retrieval with access control, is only 2-3x in computation. See, e.g., https://dspace.mit.edu/handle/1721.1/151392 for one example of such a system. There are others.

So even if your 2ms retrieval latency were all CPU and zero I/O, it would only become 4-6ms due to this.

If you remove the PIR part, as I said, it's much easier, and the overhead is much, much less, since it doesn't involve tons and tons of computationally expensive encryption primitives (though some schemes still involve some).
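
To make the non-PIR case concrete, here is a minimal sketch of what "confirm user access to each possible record" amounts to (toy names, not any real engine's API): store the allowed principals with each record and intersect them with the caller's groups at retrieval time.

    # Toy sketch, not any real engine's API: each record carries its
    # own ACL, and retrieval intersects it with the caller's principals.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Record:
        doc_id: str
        text: str
        allowed: frozenset  # principals (users/groups) that may read this

    def search(records, query, user, groups):
        """Return matches the caller is entitled to see."""
        principals = frozenset({user}) | frozenset(groups)
        return [r for r in records
                if query in r.text and principals & r.allowed]

    corpus = [
        Record("d1", "quarterly revenue figures", frozenset({"finance"})),
        Record("d2", "public revenue press release", frozenset({"everyone"})),
    ]
    # alice is only in "everyone", so she gets back just d2.
    print([r.doc_id for r in search(corpus, "revenue", "alice", ["everyone"])])

Production engines push this check into the index rather than scanning records one by one, but the observable result is the same.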

DannyBee commented on Copilot broke audit logs, but Microsoft won't tell customers   pistachioapp.com/blog/cop... · Posted by u/Sayrus
planb · 4 days ago
I am assigned to develop a company internal chatbot that accesses confidential documents and I am having a really hard time communicating this problem to executives:

As long as not ALL the data the agent has access to is checked against the rights of the current user placing the request, there WILL be ways to leak data. This means Vector databases, Search Indexes or fancy "AI Search Databases" would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale.

And as access rights are complex and can change at any given moment, that would still be prone to race conditions.

DannyBee · 4 days ago
"would be required on a per user basis or track the access rights along with the content, which is infeasible and does not scale"

Citation needed.

Most enterprise (homegrown or not) search engine products have to do this, and have been able to do it effectively at scale, for decades at this point.

This is a very well known and well-solved problem, and the solutions are very directly applicable to the products you list.

It is, as they say, a simple matter of implementation - if they don't offer it, it's because they haven't had the engineering time and/or customer need to do it.

Not because it doesn't scale.
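
As an illustration of "track the access rights along with the content" (a minimal sketch with made-up names, not any product's actual API): the ACL lives in the same shared index as the content, so one index serves every user and the per-user work happens as a query-time filter.

    # Toy sketch: ACLs stored alongside content in one shared index.
    from collections import defaultdict

    class AclIndex:
        def __init__(self):
            self.postings = defaultdict(set)  # term -> {doc_id}
            self.acls = {}                    # doc_id -> frozenset(principals)

        def add(self, doc_id, text, allowed):
            self.acls[doc_id] = frozenset(allowed)
            for term in text.lower().split():
                self.postings[term].add(doc_id)

        def search(self, term, principals):
            principals = frozenset(principals)
            return [d for d in self.postings.get(term.lower(), set())
                    if principals & self.acls[d]]

    idx = AclIndex()
    idx.add("memo-1", "salary bands", {"hr"})
    idx.add("blog-1", "salary survey results", {"everyone"})
    print(idx.search("salary", {"everyone"}))  # -> ['blog-1']

No per-user index is required, and revoking access is a single ACL update rather than an index rebuild.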

DannyBee commented on Copilot broke audit logs, but Microsoft won't tell customers   pistachioapp.com/blog/cop... · Posted by u/Sayrus
doomslice · 4 days ago
Let's say you have 100,000 documents in your index that match your query, but the user has access to only 10 of them:

A basic implementation will return the top, let's say 1000, documents and then do the more expensive access check on each of them. Most of the time, you've now eliminated all of your search results.

Your search must be access aware to do a reasonable job of pre-filtering the content to documents the user has access to, at which point you then can apply post-filtering with the "100% sure" access check.

DannyBee · 4 days ago
Yes. But this is still an incredibly well-known and solved problem. As an example, Google's internal structured search engines did this decades ago at scale.
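
A minimal sketch of that two-stage pattern (illustrative names, not Google's actual design): an access-aware pre-filter inside the index so the top-K isn't exhausted by unreadable documents, then an authoritative re-check against live permissions before anything is returned.

    # Toy sketch: access-aware pre-filtering plus an authoritative
    # post-check. The index's stored ACLs may lag the source of truth.

    class Index:
        def __init__(self, docs):
            self.docs = docs  # doc_id -> (text, stored_acl)

        def candidates(self, query, principals):
            # Pre-filter: only consider docs whose (possibly stale)
            # stored ACL intersects the caller's principals.
            return [d for d, (text, acl) in self.docs.items()
                    if query in text and principals & acl]

    class Authz:
        """Authoritative, current permissions."""
        def __init__(self, current):
            self.current = current  # doc_id -> frozenset(principals)

        def can_read(self, principals, doc_id):
            return bool(principals & self.current.get(doc_id, frozenset()))

    def search(index, authz, query, principals, k=10):
        hits = index.candidates(query, principals)
        # Post-filter: re-check survivors so a permission revoked after
        # indexing cannot leak a document.
        return [d for d in hits if authz.can_read(principals, d)][:k]

    idx = Index({"d1": ("launch plan", frozenset({"eng"})),
                 "d2": ("launch blog", frozenset({"everyone"}))})
    az = Authz({"d1": frozenset({"eng"}), "d2": frozenset({"everyone"})})
    print(search(idx, az, "launch", frozenset({"everyone"})))  # -> ['d2']

The pre-filter keeps ranking useful; the post-filter is the "100% sure" check.
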
DannyBee commented on Giving people money helped less than I thought it would   theargumentmag.com/p/givi... · Posted by u/tekla
vannevar · 5 days ago
Contrary to the author's assertion, the Denver Basic Income study, which gave $1000/mo, found a significant improvement in housing for the test group vs the controls. She misread the results, failing to note the initial housing rates for control vs test.

https://www.denverbasicincomeproject.org/research

DannyBee · 5 days ago
I read your other comment with the numbers, and I don't think it makes the amazing difference you seem to think it does. Certainly not to a degree that makes it all worth it. Maybe if the groups at least plateaued in different places, but they don't. I think you seem fairly defensive (you've posted the same response repeatedly) about what still seem like middling results.

As a basic example: while your point about the starting percentages is correct, the study lost participants over time. Group A (the $1k/month group) lost 33% of its participants by T3, and Group C (the $50/month comparison group) lost 38% of its participants.

The table you quote from the final study doesn't include the people who were lost, only those who filled out both surveys, T1 and T3. So using it to say they helped a greater percentage of people is a bit weird.

They also don't give you the table for T2 in the final report; you have to go look at the interim one.

The T2 data makes T1->T3 look much less impressive, and definitely doesn't seem to support some amazing gain for Group A.

As far as I can tell, the data look even less impressive for your claim if you do T1->T2 and T2->T3 instead of just T1->T3 with only completers of both surveys included.

It would certainly be easier to tease apart the point you're trying to make if they reported the number of originally unhoused vs. originally housed participants retained at each timepoint, but they don't.

So what am I missing? Why should I look at these results and think they are amazing?

(Also, I don't think I'd agree that the main argument the author makes rests on, or is refuted by, the results of the Denver study alone.)

DannyBee commented on Electricity prices are climbing more than twice as fast as inflation   npr.org/2025/08/16/nx-s1-... · Posted by u/geox
danielmarkbruce · 7 days ago
Because public operation of infrastructure has often not gone well. And no matter who owns it, there is a cost of capital.
DannyBee · 6 days ago
It's gone fine. It's always fun to hear about the wonders of privatization where everyone conveniently ignores that the vast majority of private businesses fail miserably. Over 50 percent in five years. Mostly due to mismanagement of money, the thing they are supposed to be better at. The rate is even higher (80-90 percent) if we were to look at small businesses.

They do have better PR though.

DannyBee commented on Judge Blocks FTC Investigation of Media Matters   nytimes.com/2025/08/15/te... · Posted by u/duxup
SilverElfin · 8 days ago
How can a judge block what amounts to an anti trust investigation into Media Matters and the advertising companies? Calling it a first amendment violation seems like a stretch. When apartment owners signal pricing to each other is that also off limits because of the first amendment?
DannyBee · 8 days ago
The judge blocked a very broad CID (civil investigative demand). See page 10 for a list of just some of the documents requested.

I don't think you can argue in good faith that the vast majority of these have literally anything to do with antitrust.

DannyBee commented on Judge Blocks FTC Investigation of Media Matters   nytimes.com/2025/08/15/te... · Posted by u/duxup
ungreased0675 · 8 days ago
What tangible harms could come from an investigation? Legal fees?
DannyBee · 8 days ago
This was a challenge to a CID order.

So in this case, lots of documents and a lot of fees :)

You can look at page 10 to get a glance at some of the requests, which would be ... amazingly onerous to comply with.

DannyBee commented on Justice Dept. Settles with Greystar to End Participation in Algorithmic Pricing   justice.gov/opa/pr/justic... · Posted by u/impish9208
MarkSweep · 13 days ago
Am I reading this right that the settlement is just "don't do that again"? Is it typical in antitrust settlements to not have some sort of monetary punishment? Like if this were a class action settlement, they would have to pay back some amount of money to renters.
DannyBee · 13 days ago
Lawyer here - it varies whether there is a monetary punishment, but sure, I'd say there is in at least 75% of cases.

However, the damages are likely hard to calculate here, since it involves calculating and arguing about prevailing rental rates in a competitive market vs. the actual market due to RealPage, in a huge number of places. Greystar would have argued about every single finding you made, too.

Because of the novelty and complexity involved, Greystar could have tied this up for a decade arguing about that and appealing any results, I'm sure. On top of that, Greystar would argue all they did is share data with RealPage and use RealPage's results, so any loss is really attributable to RealPage, not to them.

Greystar may also not have tons of money. Most of their deals are debt deals. The company is private, and while revenue is roughly known, profit isn't publicly known (AFAIK). So it's hard to say what fine they could afford. The DOJ knows, of course; we just don't.

Finally, being a private firm that does what they do, my guess is they would play games with any real fine to avoid having to pay it (bankruptcy, etc.).

Overall - getting their cooperation is probably more valuable than arguing about damages for a decade and then watching Greystar play games while losing the ability to go meaningfully after RealPage.

Obviously, I'm not trying to state any of this is ethically okay or that folks who were overcharged don't deserve their money back. I'm just trying to give you a dispassionate view of some of the decision making involved and why they may have chosen what they did.

Or at least, what would normally be involved. With the Trump administration, who knows.

u/DannyBee

Karma: 30302 · Cake day: June 21, 2011
About
Xoogler just enjoying life for a while after a long time in tech. I'm also an open source lawyer.

If there is anything I can help you with, feel free to poke me.
