Readit News
hyperpape commented on Robots.txt is a suicide note (2011)   wiki.archiveteam.org/inde... · Posted by u/rafram
rglover · 6 days ago
I see old stuff like this and it starts to become clear why the web is in tatters today. It may not be respected, but unless you have a really silly config (I'm hard-pressed to even guess what you could do short of a weird redirect loop), it won't be doing any harm.

> What this situation does, in fact, is cause many more problems than it solves - catastrophic failures on a website are ensured total destruction with the addition of ROBOTS.TXT.

Of course an archival pedant [1] will tell you it's a bad idea (because it makes their archival process less effective)—but this is one of those "maybe you should think for yourself and not just implement what some rando says on the internet" moments.

If you're using version control, running backups, and not treating your production env like a home computer (i.e., you're aware of the ephemeral nature of a disk on a VPS), you're fine.

[1] Archivists are great (and should be supported), but when you turn it into a crusade, you get foolish, generalized takes like this wiki.

hyperpape · 6 days ago
Regarding silly configurations: https://danluu.com/googlebot-monopoly/.
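To make the failure mode concrete, the config the wiki rails against is usually nothing more exotic than this (illustrative, not taken from any real site):

```text
# A robots.txt that shuts out every crawler, archivers included.
User-agent: *
Disallow: /
```

Nothing about that "destroys" a site, but it does tell every well-behaved crawler to stay out entirely, which is the archivists' whole complaint.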
hyperpape commented on The Best Line Length   blog.glyph.im/2025/08/the... · Posted by u/zdw
hyperpape · 12 days ago
> There has been a surprising amount of scientific research around this issue

This article includes a throwaway link to the wikipedia page at the end of that quote. I recommend reading the relevant section (https://en.wikipedia.org/wiki/Line_length#Electronic_text), because it's pretty limited. There is really no way to tell if it (or Glyph) is accurately summarizing the research.

hyperpape commented on Try and   ygdp.yale.edu/phenomena/t... · Posted by u/treetalker
__MatrixMan__ · 14 days ago
I would only say "try and" if I thought it was likely that I'd at least make some progress towards the goal.

If I expected failure, I'd instead say "try to" fix it.

hyperpape · 14 days ago
Maybe, but even if true, it's still very clearly different from what the parent said.

To me, "I'm gonna try and fix it before I buy a new one, but that's probably what I'm gonna have to do" is a fine sentence.

hyperpape commented on Try and   ygdp.yale.edu/phenomena/t... · Posted by u/treetalker
onionisafruit · 14 days ago
You can also interpret the Dr Dre quote as an abbreviation of, “I’m gonna try (to change the course of hip hop again) and change the course of hip hop again.”

In this form “try and” means you will try to do something and that you will succeed. Some of the article’s tests make more sense in this light: of course you wouldn’t reorder the trying and the succeeding, because that’s the order the events will happen in.

This ignores the fact that “try and” developed concurrently with “try to” and possibly before. So it wasn’t originally an abbreviation for a phrase that was yet to be established.

hyperpape · 14 days ago
That's not what "try and" means though. It's perfectly fine to say "I'm gonna try and fix this" when you don't know if you can fix it.

(Source: I say that shit all the time).

hyperpape commented on We shouldn't have needed lockfiles   tonsky.me/blog/lockfiles/... · Posted by u/tobr
yawaramin · 18 days ago
How do lockfiles solve this problem? You would still have dependency-upgrade tickets and whack-a-mole, no? Or do you just never upgrade anything?
hyperpape · 18 days ago
I think the difference is that since libraries do not specify version ranges, you must manually override their choices to find a compatible set of dependencies.

The solution is version ranges, but this then necessitates lockfiles, to avoid the problem of uncontrolled upgrades.

That said, there's an option that uses version ranges, and avoids nondeterminism without lockfiles: https://matklad.github.io/2024/12/24/minimal-version-selecti....

Note: maven technically allows version ranges, but they're rarely used.
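For anyone who doesn't want to click through, minimal version selection fits in a few lines. This is a rough sketch of the idea, not any real tool's code, with made-up module names and naive string version comparison:

```python
# Sketch of minimal version selection (MVS), the approach in matklad's post:
# each library declares the *minimum* version it needs of each dependency,
# and the build picks, for every dependency, the maximum of those minimums.
# Deterministic without a lockfile, because nothing ever floats to "latest".

def minimal_version_selection(root, requirements):
    """requirements maps (module, version) -> {dep_name: min_version, ...}."""
    selected = {}  # dep_name -> highest minimum version anyone asked for
    stack = [root]
    while stack:
        mod_ver = stack.pop()
        for dep, min_ver in requirements.get(mod_ver, {}).items():
            # Take the highest declared minimum -- never the newest release.
            # (Naive string comparison; real tools parse versions properly.)
            if min_ver > selected.get(dep, ""):
                selected[dep] = min_ver
                stack.append((dep, min_ver))
    return selected

deps = {
    ("app", "1.0"):  {"json": "1.2", "http": "2.0"},
    ("http", "2.0"): {"json": "1.5"},  # http needs a newer json than app asked for
    ("json", "1.2"): {},
    ("json", "1.5"): {},
}
print(minimal_version_selection(("app", "1.0"), deps))
# -> {'json': '1.5', 'http': '2.0'}
```

The point is that upgrades only happen when someone explicitly raises a minimum, so the build is reproducible with no lockfile at all.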

hyperpape commented on We shouldn't have needed lockfiles   tonsky.me/blog/lockfiles/... · Posted by u/tobr
potetm · 18 days ago
The point isn't, "There are zero problems with maven. It solves all problems perfectly."

The point is, "You don't need lockfiles."

And that much is true.

(Miss you on twitter btw. Come back!)

hyperpape · 18 days ago
I think Maven's approach is functionally lock-files with worse ergonomics. You can only use the dependency versions that the libraries you use declare, so you're stuck waiting for those libraries to update.

As an escape hatch, you end up doing a lot of exclusions and overrides, basically creating a lockfile smeared over your pom.
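Concretely, it tends to look something like this (a hypothetical pom.xml fragment with invented artifact names):

```xml
<!-- Pin a transitive dependency for the whole build... -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-lib</artifactId>
      <version>2.3.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>framework</artifactId>
    <version>5.0.0</version>
    <!-- ...and exclude the copy this library would otherwise drag in. -->
    <exclusions>
      <exclusion>
        <groupId>com.example</groupId>
        <artifactId>some-lib</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
```

Multiply that by a few dozen conflicts and you've hand-written a lockfile, just scattered across your pom.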

P.S. Sadly, I think enough people have left Twitter that it's never going to be what it was again.

hyperpape commented on We shouldn't have needed lockfiles   tonsky.me/blog/lockfiles/... · Posted by u/tobr
hyperpape · 18 days ago
> But if you want an existence proof: Maven. The Java library ecosystem has been going strong for 20 years, and during that time not once have we needed a lockfile. And we are pulling hundreds of libraries just to log two lines of text, so it is actively used at scale.

Maven, by default, does not check your transitive dependencies for version conflicts. To do that, you need a frustrating plugin that produces much worse error messages than NPM does: https://ourcraft.wordpress.com/2016/08/22/how-to-read-maven-....

How does Maven resolve dependencies when two libraries pull in different versions? It does something insane. https://maven.apache.org/guides/introduction/introduction-to....

Do not pretend, for even half a second, that dependency resolution is not hell in maven (though I do like that packages are namespaced by creators, npm shoulda stolen that).
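For the unfamiliar, the "something insane" is nearest-definition mediation: the version declared closest to your project in the dependency tree wins, and ties go to whichever was declared first. Roughly (a toy illustration with made-up artifact names, not Maven's actual code):

```python
# Sketch of Maven's "nearest definition" dependency mediation. Note what it
# does NOT do: pick the newest version, or check that the winner is compatible.
from collections import deque

def nearest_wins(root, tree):
    """tree maps (artifact, version) -> list of (artifact, version) deps."""
    chosen = {}  # artifact -> version; first one seen at the shallowest depth
    queue = deque([root])
    seen = {root}
    while queue:
        node = queue.popleft()  # BFS, so shallower declarations come first
        for dep in tree.get(node, []):
            artifact, version = dep
            if artifact not in chosen:  # nearest declaration wins, rest are dropped
                chosen[artifact] = version
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return chosen

tree = {
    ("app", "1.0"):  [("libA", "1.0"), ("libB", "1.0")],
    ("libA", "1.0"): [("json", "1.0")],  # declared first at depth 2: wins
    ("libB", "1.0"): [("json", "2.0")],  # silently discarded
    ("json", "1.0"): [],
    ("json", "2.0"): [],
}
print(nearest_wins(("app", "1.0"), tree))  # libB runs against json 1.0
```

libB was compiled against json 2.0 but gets json 1.0 at runtime, and nothing warns you unless you bolt on the enforcer plugin.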

hyperpape commented on Corporation for Public Broadcasting ceasing operations   cpb.org/pressroom/Corpora... · Posted by u/coloneltcb
tptacek · 23 days ago
This is a giant thread full of people lamenting the demise of public broadcasting so it seems like someone should write the comment that points out that CPB doesn't do PBS programming. They don't develop content. They're a grantmaking organization that manages the distribution of the congressional PBS appropriation.

The actual PBS and NPR shows you're familiar with are generally developed and produced privately, and then purchased by local PBS stations (streaming access to PBS content runs through "Passport", which is a mechanism for getting people to donate to their local PBS station even while consuming that content on the Internet). This (and other streaming things like it) is how most people actually consume this content in 2025. If your local PBS affiliate vanishes, you as a viewer are not going to lose Masterpiece Theater or Nova, because you almost certainly weren't watching those shows on linear television anyways.

The cuts are bad, I just want to make sure people understand what CPB ceasing operations actually means.

hyperpape · 23 days ago
This is useful, though it leaves open the question of what it means in practice that the grant-making organization is disappearing.
hyperpape commented on How long before superintelligence? (1997)   nickbostrom.com/superinte... · Posted by u/jxmorris12
lukeschlather · 24 days ago
No, it's about imitation, not simulation. The point is defining how large of a computer you would need to achieve similar performance to the human brain on "intelligence" tasks. The comparison to the human brain is because we know human brains can do these kinds of reasoning and motor tasks, so that helps us set a lower bound on how much computing power is necessary, but it doesn't presume we're going to simulate a human brain, that's just stated because it might be one way we could do it.

But still I think you're not engaging with the article properly - it doesn't say we will, it just talks about how much computing power you might need. And I think within the paper it suggests we don't have enough computing power yet, but it doesn't seem like you read deeply enough to engage with that conversation.

hyperpape · 24 days ago
You're right to distinguish imitation from simulation. That's a good distinction and I think the paper is discussing imitation--using similar learning algorithms to what the brain uses, fed with realistic data from input devices. But my point still stands with imitation.

> This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieve a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.

The paper very clearly suggests an estimate of the required hardware power for a particular strategy of imitating the brain. And it very clearly predicts we will achieve superintelligence by 2033.

If that strategy is a non-starter, which it is for the foreseeable future, then the hardware estimate is irrelevant, because the strategies we have available to us may require orders of magnitude more computing power (or even may simply fail to work with any amount of computing power).

hyperpape commented on How long before superintelligence? (1997)   nickbostrom.com/superinte... · Posted by u/jxmorris12
lukeschlather · 24 days ago
The predictions in this paper are 100% correct. The author doesn't predict we would have ASI by now. They accurately predict that Moore's law would likely start to break down by 2012, and they also accurately predicted that EUV will allow further scaling beyond that barrier but that things will get harder. You may think LLMs are nothing like "real" AI but I'm curious what you think about the arguments in this paper and what sort of hardware is required for a "real" AI, if a "real" AI does not require hardware in the neighborhood of 10^14 to 10^17 operations per second.

Whether or not LLMs are the correct algorithm, the hardware question is much more straightforward and that's what this paper is about.

hyperpape · 24 days ago
The entire discussion in the software section is about simulating the brain.

> Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.

> The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.

> Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs to be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.

u/hyperpape

Karma: 9138 · Cake day: July 16, 2012
About
Writing at https://justinblank.com, code at https://github.com/hyperpape

If anything I've written on this site seems interesting, or confusing, or you think I'd be interested in something you've written/read, please let me know: hn@justinblank.com.