Readit News
noirscape commented on 10 Years of Let's Encrypt   letsencrypt.org/2025/12/0... · Posted by u/SGran
crapple8430 · 5 days ago
You can add it to your user CA store, but no app will trust it since it's treated differently from the system CA store, which you can't modify without root or building your own ROM. In effect it is out of reach for most normal users, as well as people using security focused ROMs like Graphene, when ironically it can improve security in transit in many cases.
noirscape · 5 days ago
It's technically possible to get any Android app to accept user CAs. Unfortunately it requires unpacking the APK with apktool, adding a network security config to its XML resources that trusts the user CA store, and pointing the AndroidManifest.xml at it via the android:networkSecurityConfig attribute. Then you repack the APK with apktool, re-sign it with jarsigner/apksigner and finally run zipalign.
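
For reference, the injected file is just Android's standard network security config pointed at the user store - a minimal sketch (the resource name is only an example):

    <?xml version="1.0" encoding="utf-8"?>
    <!-- res/xml/network_security_config.xml -->
    <network-security-config>
        <base-config>
            <trust-anchors>
                <!-- keep the system roots, and additionally trust user-installed CAs -->
                <certificates src="system" />
                <certificates src="user" />
            </trust-anchors>
        </base-config>
    </network-security-config>

and the application element in AndroidManifest.xml gets android:networkSecurityConfig="@xml/network_security_config" added to it.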

Doesn't need a custom ROM, but it's so goddamn annoying that you might as well not bother. I know how to do these things; most users won't, and given the direction the big G is heading in with device freedom, it's not looking all that bright for this approach either.

noirscape commented on 'Source available' is not open source, and that's okay   dri.es/source-available-i... · Posted by u/geerlingguy
koolala · 5 days ago
I'm amazed how many people don't like open-source... Imagine the hellscape computers / the internet would be today if Linux was 'Source available'.
noirscape · 5 days ago
For a lot of developers, the current biggest failure of open source is the AWS/Azure/GCP problem. BigCloud has a tendency to take well-liked open source products and offer a hosted version of them, and as a result they absolutely annihilate the market share of the entity that originally made the product (which usually made money by offering supported and hosted versions of the software). Effectively, for networked software (which is the overwhelming majority of software products these days) you might as well use something like BSD/MIT rather than any of the GPLs[0], because in practice they offer the same guarantees; it's just that the BSD/MIT licenses don't contain language that makes you think they do things they actually don't. Non-networked software like kernels, drivers and most desktop software doesn't have this issue, so it doesn't apply there.

Open source for that sort of product (which most of the big switches away from open source have been about) only further entrenches BigCloud's dominance over the ecosystem. It absolutely breaks the notion that you can run a profitable business on open source. BigCloud basically always wins that race, even when they aren't cheaper, because the customer is already on BigCloud; using their hosted version means cutting through less red tape internally, since getting people to agree on BigCloud is much easier than onboarding a new third party you have to work with.

The general response to this issue from the open source side tends to be to accuse the original developers of being greedy, or of only using the ecosystem as a springboard for their own popularity.

---

I should also note that this generally doesn't apply to the fight between DHH and Mullenweg that's described in the OP. DHH just wants to kick a hornet's nest and get attention now that Omarchy isn't the topic du jour anymore - no BigCloud (or, more likely in this case, shared hosting provider) is going to copy a random kanban tool written in Ruby on Rails. They copy the actual high-profile stuff like Redis, Terraform and whatever other recent examples you can think of that got screwed by BigClouds offering their services in that way (shared hosts pretty much universally still run the classic AMP stack, which doesn't support a Ruby project, immunizing DHH's tool against that particular issue as well). Mullenweg, by contrast, does have to deal with Automattic not having a stranglehold on being a WordPress provider, since the terms of his license weren't his to make to begin with; b3/cafelog was also under the GPL and WordPress inherited that. He's been burned by FOSS, but it's also hard to say he was surprised by it, since WP is modified from another software product.

[0]: Including the AGPL; it doesn't actually do what you think it does.

noirscape commented on Valve reveals it’s the architect behind a push to bring Windows games to Arm   theverge.com/report/82065... · Posted by u/evolve2k
kasey_junk · 11 days ago
Your hypothesis then is that there is not a _single_ public company that has a healthy relationship with its customers? Not one, in the entire global public space?

When does this relationship with customers happen? Is it at the IPO? When they file the paperwork? When they contemplate going public for the first time? Or is it that any founder who might one day decide to contemplate going public was doomed to unhealthy customer relations from birth?

The obvious next thing we in society should do is abolish public equity as a concept as a customer protection mechanism?

noirscape · 11 days ago
It's not impossible to run a publicly owned company in the US that isn't insanely hostile towards its customers or employees... it's just really damn difficult because of bad legal precedent.

Dodge v. Ford is basically the source of all these headaches; the Dodge Brothers owned shares in Ford. Ford refused to pay the dividends he owed the Dodge Brothers, suspecting that they'd use the dividends to start their own car company (he wasn't wrong about that part). The Dodge Brothers sued Ford, upon which Ford's defense for not paying out dividends was "I'm investing it in my employees" (an obvious lie; it was very blatantly about not wanting to pay out). The court sided with the Dodge Brothers, and the legal opinion included a remark that the primary purpose of a director is to produce profit for the shareholders.

That's basically been US business doctrine ever since, twisted into the idea that the director's job is to maximize profit for the shareholders. It's somewhat bunk doctrine as far as I know; the actual precedent mostly translates to "the shareholders can fire the directors if they think they aren't doing a good job" (since it can be argued that as long as any solid justification exists, producing profit for the shareholders can be assumed[0]; Dodge v. Ford was largely about Ford refusing to honor his obligations with money that Dodge knew Ford had in the bank). But nobody in upper management wants to risk facing lawsuits from shareholders arguing that they made decisions that go against shareholder supremacy[1]. And so the threat of legal consequences morphs into the worst form of corporate ghoulishness, so pervasive across every publicly traded company in the US. It's why short-term decision making dominates long-term planning at pretty much every public company.

[0]: This is called the "business judgment rule", where courts broadly defer judgment on whether a business is run competently to the executives of that business.

[1]: Tragically, it being bunk legal theory doesn't change the fact that the potentially disastrous consequences of lawsuits in the US are a very real thing.

noirscape commented on Advent of Code 2025   adventofcode.com/2025/abo... · Posted by u/vismit2000
noirscape · 15 days ago
Taking out the public leaderboard makes sense imo. Even when you don't consider the LLM problem, the public leaderboard's design never really suited anyone outside the very specific short list of (US) timezones where competing for a quick solution was ever feasible.

One thing I do think would be interesting is to see the solution rate per hour block. It'd give an indication of how popular Advent of Code is across the world.

noirscape commented on Disney Lost Roger Rabbit   pluralistic.net/2025/11/1... · Posted by u/leephillips
danaris · 21 days ago
The vast majority of money for any given copyrighted work comes within the first few years of its existence. (This is extra true for things like video games.)

Furthermore, current copyright terms are decades past the death of the creator.

You seem to be thinking of copyright purely in terms of vast media conglomerates, but it affects literally every work created by every human in the country. That includes these HN discussion posts!

Additionally, I find it hard to see how your second paragraph holds. If the amount of exclusive content a given entity holds affects their odds of being bought by a larger conglomerate, I would think it would be in the opposite direction: having more exclusive content would make them more likely to be a target for acquisition, so that the larger company could then hold all of that exclusively.

If everything older than, say, 35 years were suddenly in the public domain, available to be distributed by any of the distribution companies, and Hypothetical Media Corp had half the back catalogue that they used to, then surely that would make big conglomerates less interested in buying up Hypothetical Media Corp?

noirscape · 21 days ago
> Furthermore, current copyright terms are decades past the death of the creator.

It's important to recognize why this is the case - a lot of the hubbub around posthumous copyright comes from the fact that a lot of classic literature went unrecognized during the author's lifetime (a classic example is Moby Dick, which sold and reviewed poorly - Melville only made $1,260 from the book in total, and his wife made only ~$800 from it in the 8 years it remained under copyright after Melville died, even though it's hard not to imagine it on a literature list these days). Long copyright terms existed to ensure that an author's family didn't lose out on any potential sales that came much later. Even more recent works, like Lord of the Rings, also heavily benefited from posthumous copyright, as it allowed Tolkien's son to actually turn the books into the modern classics they are today, through carefully curating the rereleases and additions to the work (the map of Middle-earth, for instance, was drawn by Tolkien's son).

It's mostly a historical justification though; copyright pretty blatantly just isn't designed with the internet in mind. Personally I think an unconditional 50 years is the right copyright term. No "life+50"; just 50.

50 years of copyright should be more than enough to get as much mileage out of a work as possible, without running into the current insanity where all of the modern world's cultural touchstones are in the hands of a few megacorporations. For reference, 50 years means that everything from before 1975 would no longer be under copyright today, which seems like a much fairer length to me. It also means that if you create something popular, you have roughly the entire duration of a person's working life (starting at 18-23, ending at 65-70) to make money from it.

noirscape commented on 210 IQ Is Not Enough   taylor.town/iq-not-enough... · Posted by u/surprisetalk
jmull · 24 days ago
I don't see the particularly useful/meaningful part here.

Who knows what you're referring to, but generally IQ tests measure general mental abilities on things society generally finds good. That's fine, but general education does the same in far more detail and comes with a robust achievement measurement (grades, and graduation/degrees).

IQ competes with other measures that exist anyway and comes up short.

noirscape · 24 days ago
Grades aren't necessarily an indicator of whether a person comprehends the educational material. Someone can visibly under-perform on general tests but, when questioned in person or made to do an exam, still recite the material off the top of their head, apply it correctly and even take it in a new direction. Those are underachievers; they know what they can do, but for one reason or another they simply refuse to show it (a pretty common cause is finding the general coursework demeaning, or the teachers using the wrong teaching methods, so they don't put a lot of effort into it[0]). Give them coursework above their level, and they'll suddenly get acceptable/correct results.

IQ can be used somewhat reliably to identify whether someone is an underachiever or legitimately struggling. That's what the tests are made and optimized for; they're designed to test how quickly a person can make the connection between two unrelated concepts. If they do it quickly enough, they're probably underachieving compared to what they can actually do, and it may be worth trying to give them more complicated material to see if they can handle it. (And conversely, if it turns out they're actually struggling, it may be worth dedicating more time to helping them.)

That's the main use of it. Anything else you attach to IQ is a case of correlation not being causation, and anyone who thinks it's worth more than that is being silly. High/low IQ correlates with very little besides a general trend in how quickly you can recognize patterns (because of how the statistics work, any score beyond roughly the 95th percentile is basically the same anyway, and IQ scores are renormalized every couple of years; this is about as far as you can go with IQ - there's very little difference between 150/180/210 or whatever other high number you imagine).

noirscape commented on 210 IQ Is Not Enough   taylor.town/iq-not-enough... · Posted by u/surprisetalk
noirscape · 24 days ago
It keeps astounding me that people assign value to a score that was mainly intended to find outliers in the education system, as if it were anything besides that.

Or to quote the late theoretical physicist Stephen Hawking: "People who boast about their IQ are losers".

noirscape commented on Yt-dlp: External JavaScript runtime now required for full YouTube support   github.com/yt-dlp/yt-dlp/... · Posted by u/bertman
usrbinbash · a month ago
> It's absolutely insane to me how bad the user experience is with video nowadays

Has nothing to do with video per se. Normal embeddings, using the standard `<video>` element and no unnecessary JS nonsense, still work the same way they did in the 90s: Right click the video and download it, it's a media element like any other.

The reason the user experience is going to shite is that turbocapitalism went to work on what was once The Internet, and is trying to turn it into a paywalled profit-machine.

noirscape · a month ago
The problem with a standard video element is that while it's mostly nice for the user, it tends to be pretty bad for the server operator. There are a ton of problems with browser video, beginning pretty much entirely with "what codec are you using". It sounds easy, but the unfortunate reality is that there are a billion different video codecs (and heavy use of Hyrum's law/spec abuse around them) and a browser only supports a tiny subset of them. Hosting video therefore already requires transcoding the video to a different storage format as a baseline; unlike a normal video file you can't just feed it to VLC and get playback, you're dealing with the terrible browser ecosystem.

Then once you've found a codec, the other problem immediately rears its head: video compression is pretty bad if you want a widely supported codec, if for no other reason than that people use non-mainstream browsers that can be years out of date. So you are now dealing with massive amounts of storage space and bandwidth effectively being eaten up by duplicated files, and that isn't cheap either. To give an estimate: under most VPS providers that aren't hyperscalers, a plain text document can be served to a couple million users without having to think about your bandwidth fees. Images are bigger, but not by enough to worry about. 20 minutes of 1080p video is about 500 MB under a well-made codec that doesn't mangle the video beyond belief. That video is going to reach at most 40,000 people before you burn through 20 terabytes of bandwidth (the Hetzner default amount), and in reality probably fewer, because some people might rewatch the thing. Hosting video is the point where your bandwidth bill overtakes your storage bill.
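
Back-of-the-envelope version of that estimate (the 500 MB file size and 20 TB quota are the numbers from above; the text-page size is an assumed figure for comparison):

    # Rough bandwidth math for self-hosted video.
    video_size_gb = 0.5          # ~20 min of 1080p at a reasonable bitrate
    monthly_quota_tb = 20        # included-traffic allowance from the example above

    full_views = (monthly_quota_tb * 1000) / video_size_gb
    print(f"Full views before the quota is gone: {full_views:,.0f}")   # 40,000

    # Compare: a 100 kB text page (assumed size) under the same quota.
    page_size_gb = 100 / 1_000_000
    page_loads = (monthly_quota_tb * 1000) / page_size_gb
    print(f"Page loads for the same traffic: {page_loads:,.0f}")       # 200,000,000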

And that's before we get into other expected niceties like seeking through a video while it's playing. Modern video players (the "JS nonsense" ones) can both buffer a video and jump to any point in it, even outside the buffer. That's not a guarantee with the HTML video element; your browser is probably just going to keep quietly downloading the file while you're watching it (eating into server operator cost), and seeking ahead will just freeze the output until the download has caught up to that point.

It's easy to claim hosting video is simple, when in practice it's probably the single worst thing to host on the internet (well, that and running your own mailserver, but that's not only because of technical difficulties). Part of YouTube being bad is just hyper-capitalism, sure, but the more complicated techniques like HLS/DASH pretty much entirely exist because hosting video is so expensive and "preventing your bandwidth bill from exploding" is really important. That's also why there's no real competition to YouTube; the economics of hosting video only make sense if you have a Google amount of money and datacenters to throw at the problem, or don't care about your finances in the first place.

noirscape commented on What the hell have you built   wthhyb.sacha.house/... · Posted by u/sachahjkl
wewewedxfgdf · a month ago
"Maybe Redis for caching".

Really, that's going way too far - you do NOT need Redis for caching. Just put it in Postgres. Why go to this much trouble to put people in their place for overengineering, then concede "maybe Redis for caching", when this is absolutely something you can do in Postgres? The author clearly cannot stop their own inner desire for overengineering.

noirscape · a month ago
A cache can help even for small stuff if there's something time-consuming to do on a small server.

Redis/valkey is definitely overkill though. A slightly modified memcached config (only so it accepts larger items; server responses larger than 1 MB aren't always avoidable) is a far simpler solution that provides 99% of what you need in practice. Unlike redis/valkey, it's also explicitly a volatile cache that can't do persistence, which disincentivizes the bad design pattern where the cache becomes state your application assumes any level of consistency of (including its existence). If you aren't serving millions of users, a stateful cache is a pattern best avoided.
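
As a rough sketch of that pattern (assuming a local memcached started with a raised item-size limit, e.g. memcached -I 8m, and the pymemcache client; the key and function names are made up):

    # Cache-aside against a purely volatile memcached; a miss or a dead cache
    # just means doing the real work again, never an error.
    import json
    from pymemcache.client.base import Client

    # ignore_exc=True makes connection problems look like cache misses.
    cache = Client(("localhost", 11211), ignore_exc=True)

    def expensive_report():
        # Stand-in for the slow query/render you actually want to avoid repeating.
        return {"rows": list(range(1000))}

    def get_report():
        cached = cache.get("report")            # bytes or None
        if cached is not None:
            return json.loads(cached)
        report = expensive_report()
        # Short TTL; the application never assumes this entry exists.
        cache.set("report", json.dumps(report).encode(), expire=300)
        return report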

DB caches aren't very good mostly because of speed; they have to read from the filesystem (and have network overhead), while a cache reads from memory and can often just live on the same server as the rest of the service.

noirscape commented on AI Slop vs. OSS Security   devansh.bearblog.dev/ai-s... · Posted by u/mooreds
dvt · a month ago
> Requiring technical evidence such as screencasts showing reproducibility, integration or unit tests demonstrating the fault, or complete reproduction steps with logs and source code makes it much harder to submit slop.

If this isn't already a requirement, I'm not sure I understand what even non-AI-generated reports look like. Isn't the bare minimum of CVE reporting a minimally reproducible example? Like, even if you find some function that, for example, doesn't do bounds-checking on some array, you can trivially write some unit test code that's able to break it.

noirscape · a month ago
The problem is that a lot of CVEs don't represent "real" vulnerabilities, but merely theoretical ones that could hypothetically be combined to make a real exploit.

Regex exploitation is the forever example to bring up here, as it's generally the main reason that "autofail the CI system the moment an auditing command fails" doesn't work on certain codebases. This happens because it's trivial to craft a string that wastes significant resources when a regex is matched against it, and the moment you have a function that accepts a user-supplied regex pattern, that's suddenly an exploit... which gets a CVE. A lot of projects then have CVEs filed against them because internal functions rely on regex calls as arguments, even if they're in code the user is flat-out never going to be able to interact with (i.e. several dozen layers deep in framework soup there's a regex call somewhere, in a way the user can't reach unless a developer several layers up starts breaking the framework they're using in really weird ways on purpose).
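
To give a feel for how cheap this class of issue is to trigger, here's the classic catastrophic-backtracking case in Python's re module (pattern and input are contrived for illustration):

    # The nested quantifier forces the backtracking regex engine to try
    # exponentially many ways to split the 'a's before it can reject the input.
    import re

    pattern = re.compile(r"^(a+)+$")    # pathological pattern
    payload = "a" * 30 + "b"            # trailing 'b' guarantees the match fails

    # On CPython this single call can take on the order of a minute;
    # a few more 'a's and it effectively never returns.
    print(pattern.match(payload))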

The CVE system is just completely broken and barely serves as an indicator of anything, really. The approval process, from what I can tell, favors acceptance over rejection: the people reviewing the initial CVE filing aren't the same people who actively investigate whether the CVE is bogus, and the incentive for the CVE system is literally to get companies to give a shit about software security (a fact that's also often exploited to create beg bounties). CVEs have been filed against software for what amounts to "a computer allows a user to do things on it" even before AI slop made everything worse; the system was already questionable in quality 7 years ago, and is even worse these days.

The only indicator it really gives is that a real security exploit can feel more legitimate if it gets a CVE assigned to it.

u/noirscape

Karma: 2969 · Cake day: September 17, 2018