With the rise of these retro-looking websites, I feel it's possible again to start using a browser from the '90s. Someone should make a static-site social media platform for full compatibility.
Not so much. While a lot of these websites use classic approaches (handcrafted HTML/CSS, server-side includes, etc.) and aesthetics, the actual versions of those technologies are often rather modern. For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C Candidate Recommendation in 2017), while a period site would have used an HTML table to get the same effect. Of course, you wouldn't want to do it that way nowadays, because it wouldn't be responsive or mobile-friendly.
(I don't think this detracts from such sites, to be clear; they're adopting new technologies where those provide practical benefits to the reader, and many indieweb proponents push the movement as a progressive, rather than reactionary, praxis.)
A couple of years ago I made this: https://bootstra386.com/ ... it's for a project. This is genuinely 1994 style with 1994 code that will load on 1994 browsers. It doesn't force SSL, so it really does work; I made sure of it.
The CSS on the page is only to make modern browsers behave like old ones in order to match the rendering.
The guestbook has some JavaScript to defeat spam, if you look closely (https://bootstra386.com/guestbook.html), but it's the kind of JavaScript that Netscape 2.0 can run without issue.
> For example, TFA looks like a page I'd have browsed in IE5 as a kid, but if you look at the markup, it's using HTML5 tags and Flexbox (which became a W3C Candidate Recommendation in 2017), while a period site would have used an HTML table to get the same effect.
Are they going out of their way to recreate an aesthetic that was originally the easiest thing to create given the language specs of the past, or is there something about this look and feel that is so fundamental to the idea of making websites that basically anything that looks like any era or variety of HTML will converge on it?
This is totally doable! It can be done with static sites + rss (and optionally email).
For example, I do this with my website. I receive comments via email (with the sender’s addresses hashed). Each page/comment-list/comment has its own rss feed that people can “subscribe” to. This allows you to get notified when someone responds to a comment you left, or comments on a page. But all notifications are opt-in and require no login because your rss reader is fetching the updates.
Since I’m the moderator of my site, I subscribe to the “all-comments” feed and get notified upon every submission. I then review the comment, and the site rebuilds. There are no logins or sign-ups. Commenting is just pushing, and notifications are just pulling.
I plan on open sourcing the commenting aspect of this (it’s called https://r3ply.com) so this doesn’t have to be reinvented for each website, but comments are just one part of the whole system:
The web is the platform. RSS provides notifications (pull). Emailing provides a way to post (push) - and moderate - content. Links are for sharing and are always static (never change or break).
The one missing piece is a “pending comments” cache, for when you occasionally get HN-like traffic and need comments to be temporarily displayed immediately. I’m building this now, but it’s really optional and would be the only thing in this system that even requires JS or SSR.
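A rough sketch of the flow described above, using only the Python standard library. All names, data, and helpers here are hypothetical illustrations, not r3ply's actual code: comments arrive by email, the sender's address is stored only as a hash, and each page gets its own RSS feed that readers can poll.

```python
import hashlib
from xml.sax.saxutils import escape

def hash_sender(address: str) -> str:
    """Store only a hash of the commenter's address, never the address itself."""
    return hashlib.sha256(address.strip().lower().encode()).hexdigest()[:12]

def comments_feed(page_url: str, comments: list[dict]) -> str:
    """Render a minimal RSS 2.0 feed for one page's comments."""
    items = "".join(
        "<item>"
        f"<title>Comment by {escape(c['author_hash'])}</title>"
        f"<link>{escape(page_url)}#comment-{c['id']}</link>"
        f'<guid isPermaLink="false">{c["id"]}</guid>'
        f"<description>{escape(c['body'])}</description>"
        "</item>"
        for c in comments
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<rss version="2.0"><channel>'
        f"<title>Comments for {escape(page_url)}</title>"
        f"<link>{escape(page_url)}</link>"
        "<description>Per-page comment feed</description>"
        f"{items}</channel></rss>"
    )

# A rebuild step would write one such feed per page; the reader's RSS
# client pulls it, so no login or server-side push is needed.
comment = {
    "id": "c1",
    "author_hash": hash_sender("reader@example.com"),
    "body": "Nice post!",
}
feed = comments_feed("https://example.com/post.html", [comment])
```

The key property is that the feed file is static: notifications are whatever the reader's RSS client pulls, and the raw email address never reaches the published site.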
It does not work for people who only use a web interface for e-mail, though. It would be nice to provide textual instructions (send this subject to this e-mail address) instead of mailto links only.
Your comment system is fantastic. I've been looking for something like this literally for decades. I hope you open source it soon; I would like to use it with my blog.
I loaded up Windows 98SE SP2 in a VM and tried to use it to browse the modern web, but it was basically impossible, since it only supported HTTP/1.1 websites. I was only able to find maybe 3-4 websites that still supported it and would actually load.
If your definition of social-media includes link aggregators, check https://brutalinks.tech. I've been working on things adjacent to that for quite a while now and I'm always looking for interested people.
The biggest issue there is that regardless of how old your HTML elements are, the old browsers only supported SSL 2/3 at best, and likely nothing at all, meaning you can't connect to basically any website.
(For the youth, this is basically what Yahoo was, originally; it was _ten years_ after Yahoo started before it had its own crawler-based search engine, though it did use various third parties after the first few years.)
(I recall too that when Yahoo did add their own web crawler, all web devs did was add "Pamela Anderson" a thousand times in their meta tags in order to get their pages ranked higher. Early SEO.)
This is cute, but I absolutely do not care about buying an omg.lol URL for $20/yr. I'm not trying to be a hater; the concept is fine. But anybody who falls into this same boat should know this is explicitly "not for them".
While I'm usually one of those who complain about subscription services, $20 per year is not considerably more than registering a .com with whois protection. Given that you get a registered, valid domain name that you have control over, it's not a bad deal. It also helps filter out low-effort spam, especially if they decided to add a limit of only n registrations per credit card, should that become a problem.
We're always discussing something along the lines of "if you're not paying for it, you're the product" in the context of social media, yet now we're presented with a solution and criticize that it isn't free.
You can also roll your own webring/directory for free on your ISP's guest area (if they still offer that) and there's no significant network effect to url.town yet that would make you miss out if you don't pay.
I hadn't realised that this was tied to omg.lol until your comment, but now I'm confused. If it's from the omg.lol community, how come the address isn't something like url.omg.lol? (i.e., it's a community around a domain, so why isn't that domain used here?)
Having studied, and attempted to build, a few taxonomies / information hierarchies myself (a fraught endeavour, perhaps information is not in fact hierarchical? (Blasphemy!!!)), I'm wondering how stable the present organisational schema will prove, and how future migrations might be handled.
Unexpectedly related to the problem of perfect classification is McGilchrist’s The Master and His Emissary. It argues that the human mind is a duet in which each part exhibits a different mode of attending to reality: one seeks patterns and classifies, while the other experiences reality as an indivisible whole. The former is impossible to do “correctly”[0]; the latter is impossible to communicate.
(As a bit of meta, notice how this argument itself has to use the classifying approach; but that doesn't defeat the point, and is rather a prerequisite for communicating it.)
Notably, the classifying mode was shown in other animals (as this is common to probably every creature with two eyes and a brain) to engage when seeking food or interacting with friendly creatures. This highlights its ultimate purposes—consumption and communication, not truth.
In a healthy human both parts act in tandem by selectively inhibiting each other; I believe in later sections he goes a bit into the dangers of over-prioritizing exclusively the classifying part all the time.
Due to the unattainability of comprehensive and lossless classification, presenting information in ways that allows for coexistence of different competing taxonomies (e.g., tagging) is perhaps a worthy compromise: it still serves the communication requirement, but without locking into a local optimum.
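A minimal sketch of that compromise, with invented data: tags let several competing taxonomies coexist over the same items, instead of forcing each item into exactly one branch of a single hierarchy.

```python
from collections import defaultdict

# Each item carries multiple tags; no single hierarchy is imposed.
articles = {
    "intro-to-rss": {"syndication", "web", "tutorial"},
    "lcsh-history": {"libraries", "classification", "history"},
    "tagging-vs-trees": {"classification", "web"},
}

# Invert item->tags into tag->items; each tag is one "view" of the corpus.
by_tag = defaultdict(set)
for slug, tags in articles.items():
    for tag in tags:
        by_tag[tag].add(slug)

# The same article appears under independent, coexisting classifications:
web_view = by_tag["web"]
classification_view = by_tag["classification"]
```

The design choice is exactly the one argued for above: no view is privileged, so no single taxonomy has to be "correct".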
[0] I don’t recall off the top of my head exactly how Iain gets there (there is plenty of material), but similar arguments were made elsewhere—e.g., Clay Shirky’s points about the inherent lossiness of any ontology and the impossible requirement to be capable of mind reading and fortune telling, or I personally would extrapolate a point from the incompleteness theorem: we cannot pick apart and formally classify a system which we ourselves are part of in a way that is complete and provably correct.
Yes, the seeming hierarchy in information is a bit shallow. Yahoo, Altavista and others tried this, and it soon became unmanageable. Google realized that keywords and page-ranking were the way to go. I think keywords are sort of the same as dimensions in multi-dimensional embeddings.
Information is basically about relating something to other known things. A closer relation is interpreted as proximity in a taxonomy space.
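A hedged illustration of "keywords as dimensions" (the documents and keyword counts here are made up): represent each document as a bag-of-keywords vector and read relatedness as cosine proximity in that space.

```python
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse keyword-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented toy documents: two web directories and one unrelated page.
doc_yahoo = {"directory": 3, "search": 2, "links": 1}
doc_dmoz = {"directory": 2, "links": 2, "volunteers": 1}
doc_recipes = {"flour": 3, "oven": 2}

sim_directories = cosine(doc_yahoo, doc_dmoz)   # shared keywords -> close
sim_unrelated = cosine(doc_yahoo, doc_recipes)  # no shared keywords -> far
```

Modern embeddings replace these hand-picked keyword axes with learned dense dimensions, but the "closeness as relatedness" reading is the same.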
The US Library of Congress is an interesting case study to my mind. The original classification scheme came from Thomas Jefferson's private library (he donated the collection to the US Government after the original Library of Congress was burned during the War of 1812). The classification has been made more detailed (though so far as I know the original 20 alphabetic top-level classes remain as Jefferson established them), and there's been considerable re-adjustment as knowledge, mores, and the world around us have changed. The classification has its warts, but it's also very much a living process, something I feel is greatly underappreciated.
At the same time, the Library also has its equivalent of keywords, the Library of Congress Subject Headings. Whilst a book or work will have one and only one Classification assigned to it (the Classification serving essentially as an index and retrieval key), there may be multiple Subject Headings given (though typically only a few, say 3--6 for a given work). These are used to cross-reference works within the subject index.
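A toy sketch of that scheme (the records, call numbers, and headings below are invented for illustration): each work gets exactly one classification, its retrieval key, but may carry several subject headings, which together build a cross-reference index.

```python
from collections import defaultdict

# Hypothetical catalog records: one classification each, several subjects.
works = [
    {"title": "The LCSH Century", "classification": "Z695",
     "subjects": ["Subject headings, Library of Congress",
                  "Cataloging--History"]},
    {"title": "Jefferson's Library", "classification": "Z733",
     "subjects": ["Libraries--History", "Cataloging--History"]},
]

# The subject index maps each heading to every classification that shares it,
# letting one work be found from multiple subjects despite its single shelf key.
subject_index = defaultdict(list)
for w in works:
    for s in w["subjects"]:
        subject_index[s].append(w["classification"])
```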
The Subject Headings themselves date to 1898, and there is in fact an article on the ... er ... subject, "The LCSH Century: A Brief History of the Library of Congress Subject Headings, and Introduction to the Centennial Essays" (2009), I'm just learning as I write this comment:
<https://www.tandfonline.com/doi/abs/10.1300/J104v29n01_01>
Nice website. But do I need to buy an omg.lol subdomain before I can contribute links here? Why is it an omg.lol subdomain? I'm happy to buy a new domain, but not so happy about buying a subdomain. I'm not sure why I'd be paying omg.lol to contribute links to url.town? What's the connection between the two?
example https://spenc.es/updates/posts/4513EBDF/
I like your thinking. Beautiful website, by the way!
https://portal.mozz.us/gopher/gopher.somnolescent.net/9/w2kr...
with these NEW values in about:config set to true:
Also, set these to false:
Isn't that https://subreply.com/ ?
What do you mean by that? Especially the "social" part?
2010 archive of dmoz: https://web.archive.org/web/20100227212554/http://www.dmoz.o...
What is (was) it? I can't find anything with a search (too many unrelated results).
X is just one cappuccino, Y is just 3.5 bagels, Z costs not more than a pint, A costs almost as much as a nice meal … and so on. God's sake! :)
(Whether for this or comparable projects.)
<https://en.wikipedia.org/wiki/Taxonomy>
<https://en.wikipedia.org/wiki/Library_classification>
https://web.archive.org/web/20191117161738/http://shirky.com...
Anyone with an account already that wants to take requests for URLs to add?
(Hey, charge $1 a request and you should be able to break even on your $20 domain purchase before the day is up.)
I'll take requests, but I don't guarantee I'll add just anything.