userbinator · 4 years ago
Microsoft is probably one of the worst offenders, especially in the past few years. It seems like they're actively destroying documentation and making it hard to find important information, so much so that I often use archive.org instead.
lelandfe · 4 years ago
Apple may be worse. They move documentation to the Documentation Archive and don't replace it. The Archive is a giant mass of "outdated", no longer updated documents, each assigned to 1 category. The Archive only has a title search now; full text search broke years ago.

All documentation on Help Books was archived, for instance. It's been 7 years since they've seen an update and they now contain inaccuracies – but there are no other official guides. Check out that UI: https://developer.apple.com/library/archive/documentation/Ca...

This is a technology that is still used. Nearly all of Apple's own apps have Help Books, including new ones like Shortcuts. Yet they have absolutely no official documentation on using that technology.

seba_dos1 · 4 years ago
> Check out that UI

Off-topic, but damn - tone down that candiness a bit and it looks much better and cleaner than what's there today.

hereforphone · 4 years ago
I don't know what it is that makes Microsoft so inelegant. Not only what you've said, but their APIs / programming environment in general are ugly and (I presume) unwieldy. Their apps (I just switched to Excel / OneNote / etc. from Google) have bugs that don't exist in competitors' products. The other day I couldn't use OneNote because my Internet went down (?!). Same for Excel: it doesn't reload immediately upon reconnect the way Google Sheets did.

I don't get Microsoft. They're huge. They hire a lot of people. Their products are kludges.

leetcrew · 4 years ago
Backwards compatibility is a major reason why their APIs are so ugly. I always assumed that was a core company value. Ironic that they turn around and break links to their own docs.
SavantIdiot · 4 years ago
There are over a thousand redirects in an Apache config file at a company I contracted with. The website was 20 years old when I worked there; it's now 26, and AFAIK they still stick to this principle. And it's still a creaky old LAMP stack. It can be done, but only if this equation holds:

  URL indexing discipline > number of site URLs
(There was no CMS, every page was hand-written PHP. And to be frank, maintenance was FAR simpler than the SPA frameworks I work with today.)
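Keeping old URLs alive like that doesn't require anything exotic; a line or two of mod_alias / mod_rewrite per moved page is enough. A minimal sketch of what such a config might look like (the paths here are invented for illustration, not from the site in question):

```apache
# One permanent (301) redirect per hand-written PHP page that moved.
# Old URLs keep working; browsers and crawlers learn the new location.
Redirect permanent /products/widgets.php /catalog/widgets
Redirect permanent /about_us.php         /about

# A pattern rule for a whole renamed section, via mod_rewrite.
RewriteEngine On
RewriteRule ^news/([0-9]{4})/(.*)$ /archive/$1/$2 [R=301,L]
```

The discipline is in maintaining that file every time a URL changes, which is exactly the "URL indexing discipline > number of site URLs" condition above.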

grumbel · 4 years ago
So what happened to that URN discussion? It has been 20 years. Have there been any results I can actually use on the Web today? I am aware that BitTorrent, Freenet, and IPFS use hash-based URIs, though none of them are really part of the actual Web. There is also RFC 6920, but I don't think I have ever seen that one in the wild.

Hashes aside, linking to a book by its ISBN doesn't seem to exist either as far as I am aware, at least not without going through Wikipedia's or books.google.com's services.

spc476 · 4 years ago
Twenty years on, and I can still link to any item at Amazon as long as I have its ASIN, using the template:

    https://www.amazon.com/exec/obidos/ASIN/<asin id>
Say what you will about Amazon (and Jeff Bezos), but I don't think they've broken a URL to any product of theirs ever.

Causality1 · 4 years ago
Not broken perhaps, but I regularly click a link to a product and get a page about a totally different product.
paleogizmo · 4 years ago
IEEE Xplore at least uses DOIs for research papers. Don't know if anyone else does, though.
pmyteh · 4 years ago
Everyone uses DOIs for research papers, and https://doi.org/<DOI> will take you there. In fact, I think the URI form is now the preferred way of printing DOIs.
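For anyone building those links programmatically, the doi.org proxy just takes the DOI appended to the base URL, with reserved characters percent-encoded. A minimal Python sketch (the example DOI below is a made-up placeholder, not a real one):

```python
from urllib.parse import quote

def doi_url(doi: str) -> str:
    """Build a resolvable https://doi.org/ URL for a DOI.

    DOI names can contain characters such as '<', '#', or parentheses
    that must be percent-encoded in a URL; '/' is left alone since it
    separates the DOI prefix from the suffix.
    """
    return "https://doi.org/" + quote(doi, safe="/")

# Hypothetical DOI, for illustration only:
print(doi_url("10.1234/example(1)"))
```

Because the DOI itself never changes, the resulting link stays stable even when the publisher reorganizes its site; the doi.org resolver is the level of indirection that the rest of the Web lacks.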
dredmorbius · 4 years ago
Cool rules of thumb don't run contrary to human behaviour and/or rules of nature.

If what you want is a library and a persistent namespace, you'll need to create institutions which enforce those. Collective behaviour on its own won't deliver, and chastisement won't help.

(I'd fought this fight for a few decades. I was wrong. I admit it.)

derefr · 4 years ago
People can know what good behaviour is, and not do good; that doesn't mean it isn't helpful to disseminate (widely-agreed-upon!) ideas about what is good. The point is to give the people who want to do good, the information they need in order to do good.

It's all just the Golden Rule in the end; but the Golden Rule needs an accompaniment of knowledge about what struggles people tend to encounter in the world—what invisible problems you might be introducing for others, that you won't notice because they haven't happened to you yet.

"Clicking on links to stuff you needed only to find them broken" is one such struggle; and so "not breaking your own URLs, such that, under the veil of ignorance, you might encounter fewer broken links in the world" is one such corollary to the Golden Rule.

dredmorbius · 4 years ago
In this case ... it's all but certainly a losing battle.

Keep in mind that when this was written, the Web had been in general release for about 7 years. The rant itself was a response to the emergent phenomenon that URIs were not static and unchanging. The Web as a whole was a small fraction of its present size --- the online population was (roughly) 100x smaller, and it looks as if the number of Internet domains has grown by about the same (1.3 million ~1997 vs. > 140 million in 2019Q3, growing by about 1.5 million per year). The total number of websites in 2021 depends on what and how you count, but is around 200 million active and 1.7 billion total.

https://www.nic.funet.fi/index/FUNET/history/internet/en/kas...

https://makeawebsitehub.com/how-many-domains-are-there/

https://websitesetup.org/news/how-many-websites-are-there/

And we've got thirty years of experience telling us that the mean life of a URL is on the order of months, not decades.

If your goal is stable and preserved URLs and references, you're gonna need another plan, 'coz this one? It ain't workin', sunshine.

What's good, in this case, is to provide a mechanism for archival, preferably multiple, and a means of searching that archive to find specific content of interest.

serverholic · 4 years ago
Collective behavior can work if it’s incentivized.
dredmorbius · 4 years ago
Not where alternative incentives are stronger.

Preservation for infinity is competing with current imperatives. The future virtually always loses that fight.

greyface- · 4 years ago
June 17, 2021, 309 points, 140 comments https://news.ycombinator.com/item?id=27537840

July 17, 2020, 387 points, 156 comments https://news.ycombinator.com/item?id=23865484

May 17, 2016, 297 points, 122 comments https://news.ycombinator.com/item?id=11712449

June 25, 2012, 187 points, 84 comments https://news.ycombinator.com/item?id=4154927

April 28, 2011, 115 points, 26 comments https://news.ycombinator.com/item?id=2492566

April 28, 2008, 33 points, 9 comments https://news.ycombinator.com/item?id=175199

(and a few more that didn't take off)

emmanueloga_ · 4 years ago
I know it seems to be part of HN culture to make these lists, but I'm not sure why. There's a "past" link with every story that provides a comprehensive search for anyone who's interested in past discussions :-/
dredmorbius · 4 years ago
Immediacy and curation have value.

Note that dang will post these as well. He's got an automated tool to generate the lists, which ... would be nice to share if it's shareable.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Deleted Comment

amenghra · 4 years ago

    After the creation date, putting any information in the name is asking for trouble one way or another.
Clearly these suggestions predate SEO.

mro_name · 4 years ago
and postdate :-)
tingletech · 4 years ago
that URL changed; it used to start with `http:` -- now it starts with `https:` -- not cool!
detaro · 4 years ago
The HTTP url works fine still, it sends you to the right place.
laristine · 4 years ago
Not exactly, though: it only redirects you to the HTTPS version if the server was set up that way. Otherwise, you'll get a broken page.
nicbou · 4 years ago
This is a big problem for me. I cite sources on my website and people frequently use them, but the German government seems hell-bent on rotating their URL scheme at least once a year for no reason. URLs to pages that still exist keep changing. I struggle to refer to anything on their websites.