Early browsers without DOMs (with initial release date): WorldWideWeb (Nexus) (Dec 1990), Erwise (Apr 1992), ViolaWWW (May 1992), Lynx (1992), NCSA Mosaic 1.0 (Apr 1993), Netscape 1.0 (Dec 1994), and IE 1.0 (Aug 1995).
Note: Lynx remains a non-DOM browser by design.
AOL 1.0–2.0 (1994–1995) used the AOLPress engine, which was static with no programmable objects.
The ability to interact with the DOM began with the "Legacy DOM" (Level 0) in Netscape 2.0 (Sept 1995), IE 3.0 (Aug 1996), AOL 3.0 (1996, via the integrated IE engine), and Opera 3.0 (1997). Then there was an intermediate phase in 1997 where Netscape 4.0 (document.layers) and IE 4.0 (document.all) each used its own proprietary model.
The first universal standard was the W3C DOM Level 1 Recommendation (Oct 1998). Major browsers adopted it slowly: IE 5.0 (Mar 1999) offered partial support, while Konqueror 2.0 (Oct 2000) and Netscape 6.0 (Nov 2000) were the first browsers whose engines (KHTML and Gecko) were W3C-compliant.
Safari 1.0 (2003), Firefox 1.0 (2004), and Chrome 1.0 (2008) all launched with standards-based DOM support from version 1.0.
Currently, most major browser engines follow the WHATWG DOM Living Standard, which lets features be specified and implemented continuously rather than in versioned snapshots.
The last time I checked, Dillo also has no DOM in any reasonable definition of the term; instead it directly interprets the textual HTML when rendering, which explains why it uses an extremely small amount of RAM.
> Would writing something like "DOM in the modern browsers" be more correct then?
No, I don't think so. I don't know why the GP comment is at the top, beyond historical interest. If you continue with your plans mentioned elsewhere to cover things like layout, rendering, scripting, etc., then by this standard almost everything would have to have "in modern browsers" added to it.
Part of the problem is the term "DOM" is overloaded. Fundamentally it's an API, so in that sense it only has meaning for a browser to "have a DOM" if it supports scripting that can use that API. And, in fact, all browsers that ever shipped with scripting have had some form of a DOM API (going back to the retroactively named DOM Level 0). That makes sense, because what's the point of scripting if it can't interact with page contents in some way?
So, "Lynx remains a non-DOM browser by design" is true, but only in the sense that it's not scripted at all, so of course it doesn't have DOM APIs, the same way it remains a non-canvas browser and a non-webworker browser. There's no javascript to use those things (it's a non-cssanimation browser too).
There's a looser sense of the "DOM", though, that refers to how HTML parsers turn an HTML text document into the tree structure that will then be interpreted for layout, rendering, etc.
The HTML spec[1] uses this language ("User agents must use the parsing rules described in this section to generate the DOM trees from text/html resources"), but notes it's for parsing specification convenience to act as if you'll end up with a DOM tree at the end of parsing, even if you don't actually use it as a DOM tree ("Implementations that do not support scripting do not have to actually create a DOM Document object, but the DOM tree in such cases is still used as the model for the rest of the specification.")
In that broader sense, all browsers, even non-modern ones (and Lynx) "have a DOM", since they're all parsing a text resource and turning it into some data structure that will be used for layout and rendering, even if it's the very simple layouts of the first browsers, or the subset of layout that browsers like Lynx support.
[1] https://html.spec.whatwg.org/multipage/parsing.html
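To make that broader sense concrete, here is a minimal sketch of parsing HTML text into a tree with no scripting involved at all, assuming nothing beyond Python's standard-library html.parser; the Node and TreeBuilder names are made up for this illustration, not taken from any spec or engine:

    # Sketch only: builds a crude parse tree from HTML text, the way a
    # non-scripting browser still needs *some* tree to lay out and render.
    from html.parser import HTMLParser

    class Node:
        def __init__(self, tag, parent=None):
            self.tag = tag
            self.parent = parent
            self.children = []
            self.text = ""

    class TreeBuilder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.root = Node("#document")
            self.current = self.root

        def handle_starttag(self, tag, attrs):
            node = Node(tag, self.current)
            self.current.children.append(node)
            self.current = node

        def handle_endtag(self, tag):
            if self.current.parent is not None:
                self.current = self.current.parent

        def handle_data(self, data):
            self.current.text += data

    builder = TreeBuilder()
    builder.feed("<html><body><p>Hello, <b>world</b>!</p></body></html>")

    def dump(node, depth=0):
        # Print the tree: tag name plus any direct text content.
        print("  " * depth + node.tag, repr(node.text.strip()))
        for child in node.children:
            dump(child, depth + 1)

    dump(builder.root)

Void elements and malformed markup are ignored here; a real HTML parser follows the spec's full tree-construction algorithm.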
I wouldn't do anything to "correct" your guide - I think it is "correct" as is. This comment is great for its informational content but I'd consider it an addendum, not an erratum.
If you like, it might be nice to include a section on historical and/or niche browsers that lack some of the elements this guide describes - e.g. Dillo, which is a modern browser that supports HTML4 and doesn't support JavaScript. But your guide should (imho) centrally focus on the common expectation of how popular browsers work.
This is pretty relevant to a project I'm working on - a new web browser not based on Chromium or Firefox.
Web browsers are extremely complex, requiring millions of lines of code in order to deal with a huge variety of Internet standards (and not just the basic ones such as HTML, JavaScript and CSS).
A while ago I wanted to see how much of this AI could get done autonomously (or with a human in the loop); you can see a ten-minute demo I posted a couple of days ago:
https://www.youtube.com/watch?v=4xdIMmrLMLo&t=42s
The source code for this is available here right now:
http://taonexus.com/publicfiles/jan2026/160toy-browser.py.tx...
It's only around 2,000 LOC so it doesn't have a lot of functionality, but it is able to make POST requests and can read some Wikipedia articles, for example. Try it out. It's very slow, unfortunately.
Let me know if you have anything you'd like to improve about it. There's also a feature requests page here: https://pollunit.com/en/polls/ahysed74t8gaktvqno100g
Took a quick glance through the code; it's a pretty decent basic go at it.
I can see a few reasons for slowness - you aren't using multiprocessing or threading. You might have to rework your rendering for that, though: you'll need the renderer running in a loop, re-rendering when the stack changes, with the multiprocessing/threading workers adjusting the stack as their requests finish (see the sketch after these suggestions).
Second, I'd recommend taking a look at existing Python DOM-processing modules. This will let you use existing code and extend it to fit your browser, and you won't have to deal with finding all the ridiculous parsing edge cases. It may also speed things up a bit.
I'd also recommend trying to render broken sites (save a copy, break it, see what your browser does), for the sake of completeness.
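On the first point, here is a minimal sketch of the threaded-fetch-plus-render-loop idea, assuming a queue as the shared "stack"; the names (fetch, render, page_state) are my own placeholders, not anything from the toy browser's code:

    # Sketch only: worker threads download resources and push results onto a
    # shared queue; the main loop re-renders every time a new result arrives.
    import threading
    import queue
    import urllib.request

    results = queue.Queue()  # finished (or failed) fetches land here

    def fetch(url):
        """Worker thread: download one resource and hand it to the main loop."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                results.put((url, resp.read()))
        except OSError as err:
            results.put((url, err))

    def render(page_state):
        """Placeholder renderer: redraw the page from whatever has arrived."""
        print(f"rendering with {len(page_state)} resources loaded")

    def main(urls):
        for url in urls:
            threading.Thread(target=fetch, args=(url,), daemon=True).start()

        page_state = {}
        pending = len(urls)
        while pending:
            url, payload = results.get()  # block until a worker finishes
            page_state[url] = payload
            pending -= 1
            render(page_state)            # re-render on every change

    if __name__ == "__main__":
        main(["https://example.com/", "https://example.org/"])

Since Python's GIL mostly constrains CPU-bound work, plain threads are usually enough for network fetches; multiprocessing only pays off if parsing or layout itself becomes the bottleneck.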
Thank you for your quick code review and for these many helpful tips! I'll take a look at them and see what I can put into practice.
EDIT: Unfortunately, it seems that the code is getting near the limit of Claude's context window, so I'm not able to add several of the feature suggestions you made with the present approach. I'll look into breaking it up into multiple smaller files and see if I can do any better.
https://grail.sourceforge.net/
Cool project, thanks for sharing. HN readers should also check out https://hpbn.co (High-Performance Browser Networking) and https://every-layout.dev (amazing CSS resource; the paid content is worth it, but the free parts are excellent on their own).
HPBN is really well written; chapter 4 helped me understand TLS enough to debug a high-latency issue at a previous job. There was an issue where an incomplete TLS frame was received and no subsequent bits for it ever arrived, which led to a server waiting 30 min for the rest of the bits. HPBN was a huge help. I haven't finished reading it, but I remember there's a part that goes over the trade-offs of increasing vs decreasing TLS frame sizes, which is a low-level knob I now know exists because of HPBN. Not sure if I'll ever use it, but it's fascinating.
The step I am missing is how other resources (images, style sheets, scripts) are being loaded based on the HTML/DOM. I find that crucial for understanding why images sometimes go missing or why pages sometimes appear without styling.
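For context on that step, subresource discovery is essentially a walk over the parsed HTML collecting URL-bearing attributes, and each collected URL becomes another network request (which is exactly where missing images or unstyled pages come from when one of those requests fails). A minimal sketch of the idea, assuming only Python's standard-library html.parser and not modeled on any particular engine:

    # Sketch only: scan parsed HTML for attributes that trigger further fetches.
    from html.parser import HTMLParser

    class SubresourceFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.urls = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and "src" in attrs:
                self.urls.append(("image", attrs["src"]))
            elif tag == "script" and "src" in attrs:
                self.urls.append(("script", attrs["src"]))
            elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
                self.urls.append(("stylesheet", attrs["href"]))

    finder = SubresourceFinder()
    finder.feed('<html><head><link rel="stylesheet" href="style.css">'
                '<script src="app.js"></script></head>'
                '<body><img src="logo.png"></body></html>')
    print(finder.urls)  # each entry is (kind, url); a browser would fetch these next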
Bit unfortunate that more than half of the page is dedicated to network requests, when almost all of the work and complexity of a browser is in the parsing and rendering pipeline.
Will cover the rendering engine in more detail. I didn't know which sections to go deeper into, so I just stopped and published it to gather more feedback.
I love the "mental model" approach here. Most guides I've seen either get bogged down in the minute details of TLS/Handshakes immediately or are way too high-level. The interactive packet visualization is a really nice touch to bridge that gap. Thanks for sharing!
I'm wondering if examples with Browser/Server could benefit from a small visual, e.g. a desktop/laptop icon on one side and a server on the other.
Thank you! It is a good suggestion. Let me think about it.
Thank you!