>However, due to the caching refresh cycle, CMS updates were taking many seconds to show up on the website. The issue made it problematic for content editors to preview their modifications and got progressively worse as the amount of content grew, resulting in delays lasting tens of seconds.
This seems superficial. Why not have a local-only CMS site to preview changes for the fast feedback loop, and then you only have to dump the text to prod?
>got progressively worse as the amount of content grew, resulting in delays lasting tens of seconds.
This is like the only legit concern to justify redoing this, but even then, it was still only taking seconds to a minute.
The common solution is to spin up a dedicated DNS hostname called something like "preview.www.netflix.com" and turn off all caching when users go via that path. Editors and reviewers use that, and that's... it. Solved!
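For what it's worth, the bypass itself is tiny. A minimal sketch, assuming a Java servlet stack (Servlet 4.0 API); the preview hostname is illustrative and the request attribute name is made up, nothing here comes from the article:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: requests that arrive on the preview hostname skip every
// cache layer and read straight from the CMS; everyone else gets cached pages.
public class PreviewBypassFilter implements Filter {

    private static final String PREVIEW_HOST = "preview.www.netflix.com"; // illustrative hostname

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (PREVIEW_HOST.equalsIgnoreCase(req.getServerName())) {
            // Tell the CDN and the browser not to store this response.
            ((HttpServletResponse) res).setHeader("Cache-Control", "no-store, no-cache, must-revalidate");
            // Downstream handlers can check this flag and read the CMS directly
            // instead of the cached snapshot (the attribute name is made up).
            req.setAttribute("bypassContentCache", Boolean.TRUE);
        }
        chain.doFilter(req, res);
    }
}
```

The CDN in front only has to honor Cache-Control, or be configured to skip that vhost entirely.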
But I need to build a beautiful system with global scale!!
They inflated the problem of “make content preview faster” for a small number of users to “make a fast global cache system”. That’s promotion material for you
A solution as simple as this is not easy to miss. On the other hand, to be fair, it's hard to know what other considerations were involved in the review and design process. Someone had to present a reasonable rationale to go in a certain direction.
Was just about to say this. There are many local-first, open-source CMSs, so the cost to customize them (or just build a plugin) to edit locally and publish remotely would be way less than this infra. What am I missing?
The most incredible thing is someone thought it was a good idea to write an engineering blog post about a team who so screwed up the spec for a simple static website that they needed 20 engineers and exotic tech to build it.
It can be ok to admit you did something stupid, but don't brag about it like it's an accomplishment.
It's concerning that their total uncompressed data size, including full history, is only 520 MB, and they built this complex distributed system rather than, say, rsyncing an SQLite database. It sounds like only Netflix staff write to the database.
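To make the rsync-an-SQLite-file alternative concrete, here's a rough sketch. It assumes rsync is installed and the xerial sqlite-jdbc driver is on the classpath; the host, paths, and table name are invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical "pull the whole content DB" refresh: rsync the ~500 MB SQLite file
// from wherever the CMS exports it, then serve reads from the local copy.
public class ContentSnapshotPuller {

    public static void main(String[] args) throws Exception {
        // Hypothetical export location; the real one is unknown.
        Process rsync = new ProcessBuilder(
                "rsync", "-az", "cms-export:/var/tudum/content.db", "/srv/tudum/content.db")
                .inheritIO()
                .start();
        if (rsync.waitFor() != 0) {
            throw new IllegalStateException("rsync failed, keeping the previous snapshot");
        }

        // Open the freshly synced copy and run queries against it.
        try (Connection db = DriverManager.getConnection("jdbc:sqlite:/srv/tudum/content.db");
             Statement stmt = db.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM pages")) { // table name is made up
            if (rs.next()) {
                System.out.println("pages in snapshot: " + rs.getInt(1));
            }
        }
    }
}
```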
I don't know any of the details, but they seem to have moved a lot of their internal stuff to Hollow.
So maybe it's just an attempt at unifying the tech stack rather than a concrete need. But Hollow itself is definitely heavily used internally; see, e.g., its whitepaper.
Having dealt with similar architectures, I have a hypothesis on how this library (Hollow) emerged and evolved.
In the beginning there was a need for a low-latency Java in-process database (or near cache).
Soon intolerable GC pauses pushed them off the Java heap.
Next they saw the memory consumption balloon: the object graph is gone and each object has to hold copies of referenced objects, all those Strings, Dates etc.
Then the authors came up with ordinals, which are... I mean why not call them pointers? (https://hollow.how/advanced-topics/#in-memory-data-layout)
That is wild speculation ofc. And I don't mean to belittle the authors' effort: the really complex part is making the whole thing perform outside of simple use cases.
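For readers who haven't seen the docs, here is the general idea behind ordinals as I understand it, as a plain-Java sketch rather than Hollow's actual in-memory layout: every record of a type becomes an index into flat, parallel arrays, and a reference to another record is just that record's index.

```java
// Generic sketch, not Hollow's real encoding: instead of a graph of heap objects
// (each with headers, String/Date copies, and GC pressure), records live in flat
// parallel arrays and cross-record references are ordinals, i.e. compact pointers.
public class FlatMovieStore {

    // One slot per movie ordinal.
    private final int[] titleOrdinal;     // index into a shared string pool
    private final int[] releaseYear;
    private final int[] directorOrdinal;  // index into a flat "person" type

    private final String[] stringPool;    // each distinct string stored once

    public FlatMovieStore(int[] titleOrdinal, int[] releaseYear,
                          int[] directorOrdinal, String[] stringPool) {
        this.titleOrdinal = titleOrdinal;
        this.releaseYear = releaseYear;
        this.directorOrdinal = directorOrdinal;
        this.stringPool = stringPool;
    }

    // "Dereference" a movie ordinal: no object allocation, just array lookups.
    public String title(int movieOrdinal) {
        return stringPool[titleOrdinal[movieOrdinal]];
    }

    public int year(int movieOrdinal) {
        return releaseYear[movieOrdinal];
    }

    public int director(int movieOrdinal) {
        return directorOrdinal[movieOrdinal]; // follow the "pointer" to another type
    }
}
```

Which is why calling them pointers isn't far off: a dereference is an array lookup, and nothing on that path allocates or gives the GC anything to trace.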
I don’t normally comment on technical complexity at scale, but having used Tudum, it is truly mind boggling why they need this level of complexity and architecture for what is essentially a WP blog equivalent.
Am I naive thinking this infra is overblown for a read-only content website?
As much as this website could be very trafficked I have the feeling they are overcomplicating their infra, for little gains. Or at least, I wouldn't expect it to end up with an article about it.
Reading the article I got the impression the big challenge is doing "personalization" of the content at scale.
If it were "just" static pages, served the same to everyone, then it's pretty straightforward to implement even at the >300m users scale Netflix operates at. If you need to serve >300m _different_ pages, each built in real-time with a high-bar p95 SLO then I can see it getting complicated pretty quickly in a way that could conceivably justify this level of "over engineering".
To be honest though, I know very little about this problem beyond my intuition, so someone could tell me the above is super easy!
What's the point of building & caching static pages if every single user gets their own static page... The number of users who will request each page is 1?
Yup. Content that may be read-only from a user's perspective might get updated by internal services. When those updates don't need to be reflected immediately, a CDN or similar works fine for delivering the content to a user.
When changes to data are important, a library like Hollow can be pretty magical. Your internal data model is always up to date across all your compute instances, and scaling up horizontally doesn't require additional data/infrastructure work.
We were processing a lot of data with different requirements: big data processed by a pipeline - NoSQL; financial/audit/transactional - relational; changes every 24 hrs or so but has to be delivered to the browser fast - CDN; low latency - Redis; no latency - Hollow.
Of course there are tradeoffs between keeping a centralized database in memory (Redis) and distributing the data in memory on each instance (Hollow). There could be cases where Hollow hasn't sync'd yet, so the data could be different across compute instances. In our case, it didn't matter for the data we kept in Hollow.
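For anyone who hasn't used this pattern, a generic sketch of "full dataset in memory on every instance, refreshed on a schedule"; this is not Hollow's actual API, and the snapshot source is an assumed stand-in:

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Every instance holds the whole dataset and atomically swaps in a new immutable
// snapshot on a schedule. Reads never leave the process; instances that haven't
// refreshed yet may briefly serve slightly older data.
public class LocalSnapshotCache<K, V> {

    private final AtomicReference<Map<K, V>> current = new AtomicReference<>(Map.of());
    private final ScheduledExecutorService refresher = Executors.newSingleThreadScheduledExecutor();

    // 'loadSnapshot' stands in for "download the published blob" (e.g. a full dump
    // from object storage); the actual source is an assumption here.
    public LocalSnapshotCache(Supplier<Map<K, V>> loadSnapshot, long refreshSeconds) {
        current.set(loadSnapshot.get());
        refresher.scheduleAtFixedRate(
                () -> current.set(loadSnapshot.get()),
                refreshSeconds, refreshSeconds, TimeUnit.SECONDS);
    }

    // Local, zero-network read.
    public V get(K key) {
        return current.get().get(key);
    }
}
```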
When new people join your team and learn your infrastructure, I bet they often ask “why is this so complicated? It’s just a <insert simple thing here>.”
And your response is surely “Well of course, that would be nice, but it’s not as simple as that. Here are constraints X Y and Z that make a trivial solution infeasible.”
> And your response is surely “Well of course, that would be nice, but it’s not as simple as that. Here are constraints X Y and Z that make a trivial solution infeasible.”
It's 500 MB of text. A phone could serve that at the scale we're talking about here, which is a PR blog, not netflix.com.
They're struggling with "reading" from a database that is also used for "writes", a terribly difficult situation that no commercial database engine has ever solved in the past.
Meanwhile they have a complex job engine to perform complex tasks such as joining two strings together to form a complete URL instead of just a relative path.
This is pure, unadulterated architecture astronaut arrogance.
PS: This forum, right here, supports editing at a much higher level of concurrency. It also has personalised content that is visible to users with low latency. Etc, etc... All implemented without CQRS, Kafka, and so forth.
This is a static blog site with lists about popular Netflix content.
Yes, many times there are very valid reasons for complexity, but my suspicion is that this is the result of 20+ very talented engineers+designers+pms being put in charge of building a fairly basic static site. Of course you are going to get something overengineered.
> And your response is surely “Well of course, that would be nice, but it’s not as simple as that. Here are constraints X Y and Z that make a trivial solution infeasible.”
Sometimes it’s “It was made many months/years ago in circumstances and with views that no longer seem to apply (possibly by people no longer there). Sadly, the cost of switching to something else would be significant and nobody wants to take on the risk of rewriting it because it works.”
> And your response is surely “Well of course, that would be nice, but it’s not as simple as that. Here are constraints X Y and Z that make a trivial solution infeasible.”
All problems are trivial once you ignore all real-world constraints.
Doing weird pointlessly complicated stuff on a niche area of your website is a not entirely ridiculous way to try out new things and build new skills I guess.
> As much as this website could be very trafficked I have the feeling they are overcomplicating their infra, for little gains.
This sort of personal opinion reads like a cliche in software development circles: some rando casually does a drive-by system analysis, cares nothing about requirements or constraints, and proceeds to apply simplistic judgement in broad strokes.
And this is then used as a foundation to go on a rant regarding complexity.
This adds nothing of value to any conceivable discussion.
characterizing netflix as a "read-only" website is incredibly shortsighted. you have:
- a constantly changing library across constantly changing licensing regions available in constantly changing languages
- collaborative filtering with highly personalized recommendation lists, some of which you just know has gotta be hand-tuned by interns for hyper-demographic-specific region splits
- the incredible amounts of logistics and layers upon layers of caching to minimize centralized bandwidth to serve that content across wildly different network profiles
i think that even the single-user case has mind boggling complexity, even if most of it boils down to personalization and infra logistics.
I remember being interested in their architecture when I attended re:Invent in 2018. I went to four separate Netflix talks given by four separate people with wildly different titles, teams and responsibilities. The talks had different titles indicating a variety of topics covered. Two of these talks weren't even obviously/outwardly Netflix-focused from the description -- they were just talks supposedly covering something I was curious about.
All four speakers ran the exact same slide deck with a different intro slide.
All four speakers claimed the same responsibility for the same part of the architecture.
I was livid. I also stopped attending talks in person entirely because of this, outside of smaller more focused events.
Not naive, but perhaps missing that the army of enterprise Java developers that Netflix employs does need to justify their large salaries by creating complex architecture to handle future needs.
From an outsider's perspective Tudum does seem to be an extremely simple site... But maybe they have complicated use cases for it? I'm also not convinced it merits this level of complexity.
I’m gonna take a wild guess: the actual problem they’re engineering around is the “cloud” part of the diagram (that the “Page Construction Service” talks to)
There is probably some hilariously convoluted requirement to get traffic routed/internal API access. So this thing has to run in K8s or whatever, and they needed a shim to distribute the WordPress page content to it.
Having to run in k8s doesn't change that much; the description of a whole Cassandra + Kafka stack to handle the ingestion of articles already says there's a lot more architecture-astronauting going on than simple deployment concerns.
I cannot imagine why you'd need a reactive pipeline built on top of Kafka and Cassandra to deliver some fan-service articles through a CMS. Perhaps there's some requirement about international teams needing to deliver tailored content to each market, but even with that it seems quite overblown.
In the end it will be a technical solution to an organisational issue; some parts of their infrastructure might be quite rigid, and there are teams working around it instead of with it...
Lots of people think microservices = performance gains only.
It’s not. It’s mainly for organizational efficiency. You can’t be blocked from deploying fixes or features. Always be shipping. Without breaking someone else’s code.
> As much as this website could be very trafficked I have the feeling they are overcomplicating their infra,
That is because they are, and it seems that since they're making billions and are already profitable, they're less likely to change or optimize anything.*
Netflix is stuck with many Java technologies, with all their fundamental issues and limitations. Whenever they need to 'optimize' any bottlenecks, their solution somehow is to continue over-engineering their architecture around even the tiniest offerings (other than their flagship website).
There is little reason for any startup to copy this architectural monstrosity just to get attention on their engineering blog, for little to no advantage whatsoever.
* Unless you are not profitable, infra costs continue to increase, the quality of service is sluggish, or it is otherwise urgent to do so.
> There is little reason for any startup to copy this architectural monstrosity
This is the only reasonable take in your rant, but the reasoning is off even for this. They have little reason because they will never hit the scale Netflix operates at. In the very, very unlikely event they do, they will have ample money to care about it.
Holy shit the amount of overcomplications to serve simple HTML and CSS.
Someone really has to justify their job security to be pulling shit like this, or they really gotta be bored.
If anyone can _legitimately_ justify this, please do, I'd love to hear it.
And don't go "booohooo at scale" because I work at scale and am 100% not sure what problem this is solving that can't just be solved with a simpler solution.
Also this isn't "Netflix scale", Tudum is way less popular.
I thought the article was about some internal Netflix piece of infra (well ... it is in some way) but it really is some website for some annual event ... wow.
This has to be one of the most over-engineered websites out there.
> Holy shit the amount of overcomplications to serve simple HTML and CSS.
If you read the overview, Tudum has to support content update events that target individual users and need to be templated.
How do you plan on generating said HTML and CSS?
If you answer something involving a background job, congratulations you're designing Tudum all over again. Now wait for opinionated drive-by critics to criticize your work as overcomplicated and resume-driven development.
Don't forget the part where people not even a little exposed to the massive tech infrastructure at hand and the local skill pool make WAGs about what is and isn't cheap.
Kafka might seem like extra work compared with not-Kafka, but if it's already set up and running and the entire team is using it elsewhere, suddenly it's free.
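To illustrate the "already set up" point: on the producing side, pushing a content-update event into an existing cluster really is a few lines with the standard Java client. The broker address, topic name, and payload below are made up:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch only: assumes a Kafka cluster someone else already runs and maintains.
public class ContentUpdatePublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.internal:9092"); // assumed existing cluster
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by page slug so updates to the same page stay ordered per partition.
            producer.send(new ProducerRecord<>(
                    "tudum-content-updates",                    // hypothetical topic
                    "top-10-thriller-roundup",                  // hypothetical page slug
                    "{\"event\":\"page_published\",\"version\":42}"));
        }
    }
}
```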
I hear "content update events that target individual users and need to be templated" and immediately rule out any approach involving a background job.
- Reddit doesn't use background jobs to render a new home page for every user after every update.
- Facebook doesn't use background jobs to render a new feed for every user after every update.
- Hacker News doesn't use background jobs to render a new feed for every user after every update.
Why? Because we have no guarantee a user will access the site on a particular day, or after a particular content update, so rendering every content update for every user is immediately the wrong approach. It guarantees a lot of work will be thrown away. The sensible way to do this is to render the page on demand, when (and IF) a user requests it.
Doing N*M work, where N = <# of users> and M = <# of page updates>, sure seems like the wrong approach when doing work proportional only to the number of times a user actually requests a page is an option.
There's lots of less exotic approaches that work great for this basic problem:
- Traditional Server-Side Rendering. This approach is so common that basically every language has a framework for this.
- Single-Page Applications. If you have a lot of content that only needs updating sometimes, why not do the templating in the user's browser?
- Maybe just use WordPress? It already supports user accounts and customization.
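To make the render-on-demand argument concrete, here is a minimal sketch using the JDK's built-in HttpServer. The "personalization", the inline template, and the port are placeholders, not anything from the article:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import com.sun.net.httpserver.HttpServer;

// Nothing is pre-rendered per user; the page is templated only when (and if) a
// request actually arrives.
public class OnDemandRenderer {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        server.createContext("/home", exchange -> {
            // "Personalization": look up whatever we know about this user at request time.
            String query = exchange.getRequestURI().getQuery();   // e.g. "user=alice"
            String name = (query != null && query.startsWith("user=")) ? query.substring(5) : "guest";

            // Template rendered per request; work is done only for pages actually viewed.
            // (No HTML escaping here; it's a sketch.)
            String html = "<html><body><h1>Welcome back, " + name + "</h1>"
                        + "<p>Latest content goes here.</p></body></html>";

            byte[] body = html.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
    }
}
```

Swap the string concatenation for any real template engine and the shape of the solution doesn't change.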
They are not talking about the Netflix streaming service.
https://www.netflix.com/tudum
This is the site they are talking about, and it is very similar to a WordPress-powered PR blog.
It depends.
I hadn't even heard of it until today.
This isn't an engineering problem. It's a PM+Eng Lead failing to talk to each other problem.
If you need 20 engineers and exotic tech for what should be a simple static site or WordPress site, you're doing it wrong.
This is not a spectacular problem. This is a "building a web service 101" problem, which was solved around the time CGI scripts came out.
Also, why would anyone involve background jobs? You can do high-performance templating in real time, especially on 2025 hardware.
Is this not what SSR HTML - via any backend language/framework - has been doing since forever?