boolemancer commented on Docker limits unauthenticated pulls to 10/HR/IP from Docker Hub, from March 1   docs.docker.com/docker-hu... · Posted by u/todsacerdoti
solatic · 7 months ago
Can't believe the sense of entitlement in this thread. I guess people think bandwidth grows on trees.

For residential usage, unless you're in an apartment tower where all your neighbors are software engineers and you're all behind a CGNAT, you can still do a pull here and there for learning and other hobbyist purposes, which for Docker is a marketing expense to encourage uptake in commercial settings.

If you're in an office, you have an employer, and you're using the registry for commercial purposes, you should be paying to help keep your dependencies running. If you don't expect your power plant to give you electricity for free, why would you expect a commercial company to give you containers for free?

boolemancer · 7 months ago
There's already a rate limit on pulls. All this does is make that rate limit more inconvenient by making it hourly instead of allowing you to amortize it over 6 hours.

10 per hour works out to 60 per 6 hours, which is lower than 100 per 6 hours but not in any meaningful way from a bandwidth perspective, especially since image size isn't factored into these rate limits at all.

If bandwidth is the real concern, why change to a more inconvenient time period for the rate limit rather than just lowering the existing rate limit to 60 per 6 hours?
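To spell out the arithmetic behind that comparison:

```python
# Same 6-hour window under both policies; pull counts only
# (image size is ignored by the limits either way).
old_pulls_per_window = 100      # old policy: 100 pulls per 6 hours
new_pulls_per_window = 10 * 6   # new policy: 10/hour sustained for 6 hours = 60

print(old_pulls_per_window, new_pulls_per_window)  # 100 vs. 60
# Lower, but the same order of magnitude. The real change is that a burst
# of more than 10 pulls can no longer be amortized across the window.
```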

boolemancer commented on Firefox removes "do not track" feature support   windowsreport.com/mozilla... · Posted by u/mossTechnician
darkhorse222 · 9 months ago
I doubt it. In my experience, those who block ads feel entitled to the content without payment of any kind. They see ads as an intrusion rather than a fair exchange. No, I don't see you turning it back on regardless of how things go.

The average user doesn't even recognize that running a website literally costs electricity that must be paid for. Who pays for it? Who will carry the boats?

boolemancer · 9 months ago
> The average user doesn't even recognize that running a website literally costs electricity that must be paid for. Who pays for it? Who will carry the boats?

Running a retail store also has costs associated with it, including, yes, electricity.

Yet if I walk into a store and leave without buying anything, do I feel like I owe the store owner anything?

No. That's not how that works, nor is that how it should work.

boolemancer commented on All the data can be yours: reverse engineering APIs   jero.zone/posts/reverse-e... · Posted by u/noleary
AlienRobot · 10 months ago
The only reason why "another client" can exist is due to limitations of the Internet itself.

If you could ensure that the web server can only be accessed by your client, you would do that, but there is no way to do this that can't be reverse-engineered.

Essentially your argument is that just because a door is open, you're allowed to walk in, and I don't believe that makes any sense.

boolemancer · 10 months ago
It's not a limitation of the internet, it's a fundamental property of communication.

Imagine trying to validate that all letters sent to your company were written on special company-provided typewriters; you would run into the same fundamental limits.

Whenever you design any client/server architecture, the first rule should always be "never trust the client," for that very reason.

Rather than trying to work around that rule, put your effort into ensuring that the system is correct and resilient even in the face of malicious clients.
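As a deliberately minimal sketch of that rule (hypothetical endpoint and field names): the server re-derives anything that matters rather than accepting the client's version of it.

```python
# Sketch: recompute an order total from the server's own price list,
# so a tampered client can't charge itself less.
SERVER_PRICES = {"widget": 500, "gadget": 1200}  # cents; server-side source of truth

def compute_charge(order_items: list[dict]) -> int:
    total = 0
    for item in order_items:
        name, qty = item.get("name"), item.get("quantity", 0)
        if name not in SERVER_PRICES or not isinstance(qty, int) or qty <= 0:
            raise ValueError(f"rejected untrusted item: {item!r}")
        total += SERVER_PRICES[name] * qty
    # Any client-supplied "total" field is ignored entirely.
    return total

print(compute_charge([{"name": "widget", "quantity": 2}]))  # 1000
```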

boolemancer commented on All the data can be yours: reverse engineering APIs   jero.zone/posts/reverse-e... · Posted by u/noleary
AlienRobot · 10 months ago
"if you can't distinguish the reverse engineered traffic from the traffic through your own app in order to block it, then what harm is the traffic doing?"

If you really believe this, you'll use a custom user agent instead of spoofing Chrome. :-)

Some websites use HTTP referer to block traffic. Ask yourself if any reverse engineer would be stopped by what is obviously the website telling you not to access an endpoint.

I'll add that end users don't have complete information about the website. They can't know how many resources a website has to deal with reverse engineering (webmasters can't afford to play cat and mouse with you just because you're wasting their money), nor do they know the cost of an endpoint. I mean, most tech-inclined users run ad blockers even though it's obvious that 90% of websites pay for their endpoints by showing ads, so I doubt they would respect anything more subtle than that.

boolemancer · 10 months ago
If an endpoint costs a lot to run, implement rate limits and return 429 status codes so callers know that they're calling too often.

That endpoint will be expensive regardless of whether it's your own app or a third party that's calling it too often, so design it with that in mind.

Your app isn't special, it's just another client. Treat it that way.
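Something like this, to sketch the idea (assuming Flask and a naive in-memory counter; hypothetical names throughout, and a real deployment would use a shared store like Redis):

```python
import time
from collections import defaultdict

from flask import Flask, jsonify, request

app = Flask(__name__)
WINDOW_SECONDS = 3600     # one-hour fixed window
MAX_REQUESTS = 10         # allowed requests per window, per IP
hits = defaultdict(list)  # per-IP request timestamps, in memory

@app.route("/expensive-endpoint")
def expensive_endpoint():
    now = time.time()
    ip = request.remote_addr
    hits[ip] = [t for t in hits[ip] if now - t < WINDOW_SECONDS]  # drop aged-out hits
    if len(hits[ip]) >= MAX_REQUESTS:
        retry_after = int(WINDOW_SECONDS - (now - hits[ip][0])) + 1
        # 429 tells every caller, first party or third party, to back off.
        return jsonify(error="rate limit exceeded"), 429, {"Retry-After": str(retry_after)}
    hits[ip].append(now)
    return jsonify(data="the expensive result")
```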

boolemancer commented on All the data can be yours: reverse engineering APIs   jero.zone/posts/reverse-e... · Posted by u/noleary
Eikon · 10 months ago
This approach is generally seen as unwanted by website owners (it's worth noting that automated API clients are distinct from regular user agents). As a “reverse engineer”, you have no idea how expensive it is for an endpoint to process a request.

Instead, I'd recommend reaching out to the website owners directly to discuss your API needs - they're often interested in hearing about potential integrations and use cases.

If you don't receive a response, proceeding with unauthorized API usage is mostly abusive and poor internet citizenship.

boolemancer · 10 months ago
In my personal view, this seems a little overbearing.

If you expose an API and want to tell a user that they are "unauthorized" to use it, the API should return a 401 status code so that the caller knows they're unauthorized.

If you can't do that because their traffic looks like normal usage of the API by your web app, then I question why their usage is problematic for you.

At the end of the day, you don't get to control what 'browser' the user uses to interact with your service. Sure, it might be Chrome, but it just as easily might be Firefox, or Lynx, or something the user built from scratch, or someone manually typing out HTTP requests in netcat, or, in this case, someone building a custom client for your specific service.

If you host a web server, it's on you to remember that and design accordingly, not on the user to limit how they use your service.
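For what that looks like in practice (a hypothetical Flask endpoint and key store, not anyone's actual API): the server states "unauthorized" outright instead of guessing from user agents or referers.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
VALID_API_KEYS = {"example-key-123"}  # hypothetical key store

@app.route("/api/data")
def get_data():
    key = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if key not in VALID_API_KEYS:
        # Tell the caller outright that they're unauthorized, rather than
        # inferring which "browser" they happen to be using.
        return jsonify(error="unauthorized"), 401, {"WWW-Authenticate": "Bearer"}
    return jsonify(data="the data")
```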

boolemancer commented on Regarding our Cease and Desist letter to Automattic   wpfusion.com/business/reg... · Posted by u/kemayo
zugi · a year ago
Single English words cannot be trademarked. However, if you string two of them together, and demonstrate that you are actively using the phrase in commerce, you can get the phrase trademarked for use in a particular domain, e.g. computer software.

Three words is even better.

Note that a tractor manufacturer could still trademark "Active Custom Fields" for agricultural equipment, because it would not be confusingly similar to the "Active Custom Fields" software.

Also, trademarks have to be renewed every 5-10 years, and you must show that you are actively using them.

boolemancer · a year ago
> Single English words cannot be trademarked.

Um... Apple? Shell? Alphabet? Chevron? Target? Caterpillar? Oracle? Orange?

boolemancer commented on The optimised version of 7-Zip can't be built from source   pileofhacks.dev/post/the-... · Posted by u/todsacerdoti
boolemancer · a year ago
If all that's missing is 'a nasm compatible assembler', did they try just swapping it out for nasm, which seems to have a readily available Alpine package?

https://pkgs.alpinelinux.org/package/edge/main/x86/nasm

boolemancer commented on Ryujinx (Nintendo Switch emulator) has been removed from GitHub   github.com/Ryujinx/Ryujin... · Posted by u/jsheard
naikrovek · a year ago
If people used emulators for homebrew there wouldn’t be much of a fuss about it. But they don’t, they use emulators for piracy.

It doesn’t matter if it has legitimate uses if 50%+ of the information online is about piracy and game dumping.

Nintendo is gonna care and they’re gonna try to stop these things, so long as their primary use is piracy. It doesn’t matter that there are legitimate and legal use cases. There are zero people writing homebrew of any real value for any console platform newer than the SNES as far as I’m aware. There are lots and lots of toy applications in homebrew stores but nothing serious. LOTS of detailed and useful info about how to pirate games, though.

boolemancer · a year ago
> and game dumping.

Your argument is that legally purchasing a game and playing that in an emulator is piracy?

boolemancer commented on Git-absorb: Git commit –fixup, but automatic   github.com/tummychow/git-... · Posted by u/striking
imiric · a year ago
> The PR is the thing that gets signed-off on, and the thing that goes through the CI build/tests, so why wouldn't that be the thing kept as an atomic unit?

Because it often isn't. I don't know about your experience, but in all the teams I've worked in throughout my career the discipline to keep PRs atomic is almost never maintained, and sometimes just doesn't make sense. Sometimes you start working on a change, but spot an issue that is either too trivial to go through the PR/review process, or closely related to the work you started but worthy of a separate commit. Other times large PRs are unavoidable, especially for refactorings, where you want to propose a larger change but the history of the progress is valuable.

I find conventional commits helpful when deciding what makes an atomic change. By forcing a commit to be of a single type (feature, fix, refactor, etc.), it's easier to determine what belongs together and what doesn't. But a PR can contain different commit types with related changes, and squashing them all when merging doesn't make the PR itself atomic.

> I don't think I've ever cared about the context for a specific commit within a PR once the PR has been merged. What kind of information do you expect to get out of it?

Oh, plenty. For one, when looking at `git blame` to determine why a change was made, I hope to find this information in the commit message. This is what commit messages are for anyway. If all commits have this information, following the history of a set of changes becomes much easier. This is helpful not just during code reviews, but after the merge as well, for any new members of the team trying to understand the codebase, or even the author themself in the future.

boolemancer · a year ago
> Because it often isn't. I don't know about your experience, but in all the teams I've worked in throughout my career the discipline to keep PRs atomic is almost never maintained, and sometimes just doesn't make sense. Sometimes you start working on a change, but spot an issue that is either too trivial to go through the PR/review process, or closely related to the work you started but worthy of a separate commit. Other times large PRs are unavoidable, especially for refactorings, where you want to propose a larger change but the history of the progress is valuable.

In my experience at least, PRs are atomic in that they always leave main in a "good state" (where good is pretty loosely defined as 'the tests had to pass once').

Sometimes you might make a few small changes in a PR, but they still go through a review. If they're too big, you might ask someone to split it out into two PRs.

Obviously special cases exist for things like large refactoring, but those should be rare and can be handled on a case-by-case basis.

But regardless, even if a PR has multiple small changes, I wouldn't revert or cherry-pick just part of it. Just do the whole thing or not at all.

> Oh, plenty. For one, when looking at `git blame` to determine why a change was made, I hope to find this information in the commit message. This is what commit messages are for anyway. If all commits have this information, following the history of a set of changes becomes much easier. This is helpful not just during code reviews, but after the merge as well, for any new members of the team trying to understand the codebase, or even the author themself in the future.

Yeah, but the context for `git blame` is still there when doing a squash merge, and the commit message should still be relevant and useful.

My point isn't that history isn't useful, it's that the specific individual commits that make up a PR don't provide more useful context than the combined PR commit itself does.

I don't need to know that a typo introduced in iteration 3 of PR feedback was fixed in iteration 5. It's not relevant once the PR is merged.

boolemancer commented on Git-absorb: Git commit –fixup, but automatic   github.com/tummychow/git-... · Posted by u/striking
imiric · a year ago
Every team is free to choose what works best for them, but IMO always squashing PRs is not a good strategy. Sometimes you do want to preserve the change history, particularly if the PR does more than a single atomic change, which in practice is very common. There shouldn't be a static merge type preference at all, and this should be chosen on a case-by-case basis.

At the risk of sounding judgemental, I think this preference for always squashing PRs comes from a place of either not understanding atomic commits, not caring about the benefits of them, or just choosing to be lazy. In any case, the loss of history inevitably comes at a cost of making reverting and cherry-picking changes more difficult later, making `git bisect` pretty much worthless, and losing the context of why a change was made.

boolemancer · a year ago
> At the risk of sounding judgemental, I think this preference for always squashing PRs comes from a place of either not understanding atomic commits, not caring about the benefits of them, or just choosing to be lazy. In any case, the loss of history inevitably comes at a cost of making reverting and cherry-picking changes more difficult later, as well as losing the context of why a change was made.

1) Why are you ever reverting/cherry-picking at a more granular level than an entire PR anyway? The PR is the thing that gets signed-off on, and the thing that goes through the CI build/tests, so why wouldn't that be the thing kept as an atomic unit?

2) I don't think I've ever cared about the context for a specific commit within a PR once the PR has been merged. What kind of information do you expect to get out of it?

Edit: How does it remove the context for a change or make `git bisect` useless? How big are your PRs that you can't get enough context from finding the PR commit to know why a particular change was made?

u/boolemancer

Karma: 304 · Cake day: February 26, 2019