Readit News
remich commented on The web does not need gatekeepers: Cloudflare’s new “signed agents” pitch   positiveblue.substack.com... · Posted by u/positiveblue
lucb1e · 2 days ago
You can assume it's the USA and that I'm just dead wrong, but the third word of my profile specifies where I'm from, and you'd find that the Dutch constitution matches the comment's contents

Equal protection is indeed not the same as equal treatment. No, it really does say that everyone shall be treated equally so long as the circumstances are equal (gelijke behandeling in gelijke gevallen)

remich · 2 days ago
I didn't assume; that's why I started my comment with "if by 'our constitution' you mean." Good to know you were referencing a different place, but it's unrealistic to expect people to delve into your account bio to understand what you intended by "our constitution," especially when the parent comment also contained no geographic or cultural references. Perhaps you know the parent commenter and know that they share your geography? If so, that would also have been helpful context.

As an aside, I'm curious how that language in the Dutch constitution actually works in practice. Is it just a game of distinguishing between situations or people to excuse disparate treatment? It seems like it would be unworkable if interpreted literally.

remich commented on The web does not need gatekeepers: Cloudflare’s new “signed agents” pitch   positiveblue.substack.com... · Posted by u/positiveblue
lucb1e · 2 days ago
The first article of our constitution says people shall be treated equally in equal situations. I presume that most countries have similar clauses but, beyond legalese, it's also simply in line with my ethics to treat everyone equally

There are people behind those connection requests. I don't try to guess on my server who is a bot and who is not; I'll make mistakes and probably bias against people who use uncommon setups (those needing accessibility aids or using e.g. experimental software that improves some aspect like privacy or functionality)

Sure, I have rights as a website owner. I can take the whole thing offline; I can block every 5th request; I can allow each /16 block to make 1000 requests per day; I can accept requests only from clients that have a Firefox user agent string. So long as it's equally applied to everyone and it's not based on a prohibited category such as gender or religious conviction, I am free to decide on such cuts and I'd encourage everyone to apply a policy that they believe is fair
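[The /16 cut described above could be sketched as a uniform per-block daily limiter. This is a minimal illustration of applying the same policy to everyone, not how any real server or CDN implements it; the class name and limit are invented for the example.]

```python
# Minimal sketch: cap each /16 block at N requests per UTC day,
# applied identically to every client.
import ipaddress
import time
from collections import defaultdict


class SixteenBlockLimiter:
    def __init__(self, limit=1000):
        self.limit = limit
        self.counts = defaultdict(int)
        self.day = None

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        day = int(now // 86400)
        if day != self.day:  # reset all counters at each UTC day boundary
            self.day = day
            self.counts.clear()
        # Collapse the address to its /16 network, e.g. 203.0.113.9 -> 203.0.0.0/16
        block = ipaddress.ip_network(f"{ip}/16", strict=False)
        self.counts[block] += 1
        return self.counts[block] <= self.limit


limiter = SixteenBlockLimiter(limit=2)
print(limiter.allow("203.0.113.9"))  # True  (1st request from 203.0.0.0/16)
print(limiter.allow("203.0.77.1"))   # True  (2nd request, same /16)
print(limiter.allow("203.0.200.5"))  # False (3rd request, over the cap)
```

The point of the sketch is that the rule never inspects who the client is, only how much the shared address block has consumed.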

Cloudflare and its competitors, as far as I can tell, block arbitrary subgroups of people based on secret criteria. It does not appear to be applied fairly, such as allowing everyone to make the same number of requests per unit time. I'm probably bothered even more because I happen to be among the blocked subgroup regularly (but far from all the time, just little enough to feel the pain)

remich · 2 days ago
If by "our constitution" you mean the U.S. Constitution then no, it says nothing of the sort. The first article of the U.S. Constitution concerns the organization of the legislative branch. You may be referencing the Equal Protection and Due Process clauses, in the Fifth and Fourteenth amendments, but neither of those applies in this situation either since there are no laws or governmental actions at issue here, and random sites on the internet are not universally considered to be public accommodations. Even in the ADA context, the law isn't actually clear, since websites aren't specified anywhere in the text at the federal level and there's no SCOTUS precedent on point.

Some states are more stringent with their own disability regulations or state constitutions, but no state anywhere in the U.S. has a law that says every visitor to a website has to be treated equally.

remich commented on Two narratives about AI   calnewport.com/no-one-kno... · Posted by u/RickJWagner
fleebee · a month ago
The analogy is pretty generous towards LLMs. I like Eevee's response to it in her blog post[1]:

>What I do know is that a table saw quickly cuts straight lines. That is the thing it does. It doesn’t do Whatever. It doesn’t sometimes cut wavy lines and sometimes glue pieces together instead. It doesn’t roll some dice and guess what shape of cut you are statistically likely to want based on an extensive database of previous cuts. It cuts a straight f*cking line.

>If I were a carpenter, and my colleagues got really into this new thing where you just chuck 2×4s at a spinning whirling mass of blades until a chair comes out the other side… you know, I just might want to switch careers.

[1]: https://eev.ee/blog/2025/07/03/the-rise-of-whatever/

remich · a month ago
If the original framing was too generous, the response is at least as ungenerous. Table saws aren't deterministic tools either, and anyone who has used one for more than a minute can tell you that getting it to consistently cut the straight line you want takes skill.
remich commented on Two narratives about AI   calnewport.com/no-one-kno... · Posted by u/RickJWagner
zozbot234 · a month ago
AI is glorified autocomplete. Look at what happens when AI tries its hand at writing legal briefs, and you'll understand why it cannot possibly replace software developers.
remich · a month ago
As with all uses of current AI (meaning generative AI LLMs) context is everything. I say this as a person who is both a lawyer and a software engineer. It is not surprising that the general purpose models wouldn't be great at writing a legal brief -- the training data likely doesn't contain much of the relevant case law because while it is theoretically publicly available, practicing attorneys universally use proprietary databases like Lexis and WestLaw to surface it. The alternative is spelunking through public court websites that look like they were designed in the 90s or even having to pay for case records like on PACER.

At the same time, even if you have access to proper context (say, a model that can engage with Lexis or WestLaw via tool use), surfacing appropriate matches from case law requires more than just word/token matching. LLMs are statistical models that tend to reduce down to the most likely answer. But, typically, in the context of a legal brief, a lawyer isn't trying to find the most likely answer, or even the objectively correct answer; they are trying to find relevant precedent with which to make an argument that supports the position they are advancing. An LLM by its nature can't do that without help.
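[The "reduce down to the most likely answer" tendency is, mechanically, greedy decoding: at each step the model picks the highest-probability token. A toy sketch, with an invented three-item vocabulary and made-up logits purely for illustration:]

```python
import math


def softmax(logits):
    # Standard numerically-stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


# Hypothetical next-token logits over a tiny vocabulary of precedents.
vocab = ["precedent_A", "precedent_B", "precedent_C"]
logits = [2.0, 1.0, 0.5]
probs = softmax(logits)

# Greedy decoding always surfaces the statistically likeliest option,
# regardless of which precedent best supports the argument being made.
greedy = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(greedy)  # prints "precedent_A"
```

The most probable continuation and the most useful precedent are different objectives, which is why the expert's steering matters.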

Where you're right, then, is that law and software engineering have a lot in common when it comes to how effective baseline LLM models are. Where you're wrong is in calling them glorified auto-complete.

In the hands of a novice they will, yes, generate answers that are plausible but mostly incorrect, or technically correct but unusable in some way. Properly configured with access to appropriate context, in the hands of an expert who understands how to communicate what they want the tool to produce? That's quite a different matter.

remich commented on I tried vibe coding in BASIC and it didn't go well   goto10retro.com/p/vibe-co... · Posted by u/ibobev
poniko · a month ago
Yes, and if you work with a platform that has been around for a long time, like .NET, you will most definitely get a mix of really outdated, deprecated code and the latest features.
remich · a month ago
I recommend the context7 MCP tool for this exact purpose. I've been trying to really push agents lately at work to see where they fall down and whether better context can fix it.

As a test recently I instructed an agent using Claude to create a new MCP server in Elixir based on some code I provided that was written in Python. I know that, relatively speaking, Python is over-represented in training data and Elixir is under-represented. So, when I asked the agent to begin by creating its plan, I told it to reference current Elixir/Phoenix/etc documentation using context7 and to search the web using Kagi Search MCP for best practices on implementing MCP servers in Elixir.

It was very interesting to watch how the initially generated plan evolved after using these tools and how after using the tools the model identified an SDK I wasn't even aware of that perfectly fit the purpose (Hermes-mcp).

remich commented on Show HN: Tritium – The Legal IDE in Rust   tritium.legal/preview... · Posted by u/piker
remich · 3 months ago
As someone who is a (current) software engineer and (former) lawyer I find this interesting. Not sure if I'm willing to bet on big uptake, though, unless it was through an acquisition by one of the big e-discovery companies.
remich commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
tpmoney · 3 months ago
> I can tell you that we’re already seeing a glut of security issues being explained by devs as “I asked copilot if it was secure and it said it was fine so I committed it”.

And as with Google and Stack Overflow before, the Sr Devs will smack the wrists of the Jr's that commit untested and unverified code, or said Jr's will learn not to do those things when they're woken up at 2 AM for an outage.

remich · 3 months ago
That's assuming the business still employs those Sr Devs so they can do the wrist smacking.

To be clear, I think any business that dumps experienced devs in favor of cheaper vibe-coding mids and juniors would be making a foolish mistake, but something being foolish has rarely stopped business types from trying.

remich commented on I'd rather read the prompt   claytonwramsey.com/blog/p... · Posted by u/claytonwramsey
WatchDog · 4 months ago
When using Claude Sonnet 3.7 for coding, I often find that constraints I add to the prompt, end up producing unintended side effects.

Some examples:

- "Don't include pointless comments." - The model doesn't keep track of what it's doing as well; I generally just do another pass after it writes the code to simplify things.

- "Keep things simple" - The model cuts corners (often unnecessarily) on things like type safety.

- "Allow exceptions to bubble up" - Claude deletes existing error handling logic. I found that Claude seems to prefer just swallowing errors and adding some logging, instead of fixing the underlying cause of the error, but adding this to the prompt just caused it to remove the error handling that I had added myself.

remich · 4 months ago
The unfortunate implication to this is that many codebases Claude has been trained on just choose not to handle errors...
remich commented on "AI-first" is the new Return To Office   anildash.com//2025/04/19/... · Posted by u/LorenDB
oddthink · 4 months ago
There was a 1-5 Likert scale self-rating on "leveraging AI" and a free-text box. I rambled about using claude code to help summarize my daily notes, cursor for implementation, using the chatgpt ui for broad questions (what happened to internal project X, how do I configure airflow again, etc.), then experiments with the find-the-right-table-for-you SQL generator. That seemed like about the level folks were going for.

There are some people who are really into it. The sql-generator's great for PMs; ops are experimenting with moderation triage. I personally have mixed feelings, but I'll futz with it on company time (and api $) to see if I can get it to do something useful. It'll mess up tensor alignment, but I can fix that.

So, yes, it was in the performance review. No, it wasn't a big deal. Yes, it seems to me like a reasonable nudge to get over the activation energy of learning to use the thing.

remich · 4 months ago
Tangential: what is the SQL tool you're referring to? Is that internal?
remich commented on "AI-first" is the new Return To Office   anildash.com//2025/04/19/... · Posted by u/LorenDB
guywithahat · 4 months ago
Why would it be "wild"? This is a commonly accepted compromise of working from home
remich · 4 months ago
No, it's not.

u/remich

Karma: 98 · Cake day: April 21, 2023