I wish this discussed the timing arbitration of each move. Based on the packet information (if it is correct and complete), the timing is done entirely on the clients. However, they show the time in seconds, which can't be right, so I am curious how accurate this packet schema is (or whether those are float values).
Regardless, one thing I find maddening about chess.com is the time architecture of the game. I haven't seen the underlying code, but it feels like the SERVER is tracking the time. This completely neglects transport time and latency, meaning that 1s to move isn't really a second. Playing on the mobile client is an exercise in frustration if you are playing timed games and are down to the wire. Even when you aren't, your clock will jump on normal moves, and it is most obvious during the opening.
This could also be due to generally poor network code. The number of errors I get during puzzles is also frustrating. Do they really not retry a send automatically?? <breath>
Chess.com has the brand and the names... but dang, the tech feels SO rough to me.
Chess.com software might be the worst public-facing software ever assembled.
During their most popular weekly tournament (by number of spectators), Titled Tuesday, where a significant percentage of the world elite regularly competes, they send links in a public chat to a third-party site every 4 rounds. The reason is that there is a break of a few minutes and they never managed to implement a clock on their side, so they need a third-party service for that.
This is one of many, many things, but imo it's the most telling. They can't even add a clock counting down the 6 minutes to their web client.
I guess my naive frustration comes from crazy FPS games tracking things so precisely, and yet somehow Chess.com can't handle a turn-based game?! Honestly.
I do recognize that FPS games utilize predictive algorithms and planning to estimate future player positions, but still, turn-based networking with 100ms accuracy should be a solved problem.
Netcode dev here. Predicting the clock is a trivially solved problem. The client and server know the latency between each other, so the server can offset the timestamp on the input from the client to compensate for this difference, and the client can offset its rendering of the clock data from the server. The same techniques used in regular online gaming would apply here. The only X factor is the impact of the client lying about its latency to the server; perhaps that could have an impact, not sure.
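For illustration, a minimal sketch of the compensation described above, assuming the server pings each connection to measure round-trip time. The names (GameClock, onMoveReceived) are invented; this is not lichess's or chess.com's actual code:

```typescript
// Hypothetical sketch: server-side clock compensation using a measured RTT.

interface GameClock {
  remainingMs: number;   // time left on the mover's clock
  turnStartedAt: number; // server timestamp when the turn began
}

// Latest round-trip time measured by server-initiated pings, per connection.
const rttMs = new Map<string, number>();

function onMoveReceived(playerId: string, clock: GameClock, now: number): void {
  const rawElapsed = now - clock.turnStartedAt;

  // Credit the player for the round trip: the position update travelling to
  // them and their move travelling back were both in flight, not thinking time.
  const latencyCredit = Math.min(rttMs.get(playerId) ?? 0, rawElapsed);
  const thinkingTime = rawElapsed - latencyCredit;

  clock.remainingMs = Math.max(0, clock.remainingMs - thinkingTime);
}
```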
Track the two clients' pings? What client-side cheating prevention would you need to do in chess? Afaik you can't cheat by clipping through walls or jumping around on the map.
> they show the time in seconds which can't be right
Seems right.
If you export/download games from lichess, they use the .pgn (Portable Game Notation) format, which is a standard plain-text format circa 1993, used by pretty much everyone for describing a chess game.
Lichess follows the specification to the letter, and as it technically only allows one-second accuracy, lichess only records moves with one-second accuracy. It seems insane, but that's how they do it.
Chess.com also exports PGN files, but they add a decimal place, allowing sub-second accuracy. No one has a problem with this. There is no software which cannot handle this. But Lichess refuses to "break" the spec.
"Move times are now stored and displayed with a precision of one tenth of a second. The precision even goes up to one hundredth of a second, for positions where the player had less than 10 seconds left on their clock."
What's interesting about this is that chess.com allows you to stack as many pre-moves as you like, but they each cost 0.1s, whereas on lichess you can only have one pre-move, which is technically free, but maybe not in practice because of delay.
The worst part is they call it an intentional choice.
"First off, premoves take 0.1 seconds. That is what has been preferred and agreed upon by most professional players we have consulted on the topic. They prefer .1 to .0 for premove. This is also what other chess servers do."[1]
It's super annoying and the reason I only play blitz+ on chesscom.
That would introduce other issues, I think. Since premoves are cancellable/changeable, what happens if you changed yours at the very last moment but, due to delay, the change did not reach the server in time?
I can't play bullet on chess.com for this reason. Lost way too many games on "time" even though I had a second or two on the clock. Incredibly frustrating.
Yes, he had timing problems in an online tournament on chess.com (against a Mexican GM in the same room) where his computer did not have all Windows updates and/or the timezone was wrong.
So essentially lichess chose the StackOverflow approach: (rather) beefy servers, instead of treating them "like cattle".
Interesting that they accumulate moves and periodically store game state. Unfortunately it is not very clear where they keep ongoing game state: in Redis or on the server itself. Also, the cost breakdown doesn't have a server for Redis, only for the DB.
BTW, their GitHub has a better architectural picture than the overly simplified one in the article: https://raw.githubusercontent.com/lichess-org/lila/master/pu.... Unfortunately, I'm afraid, drawing something like that during an interview may not land you a job at a FAANG =(
Note that their cost per game is fairly low: $0.00027, or about 3,671 games per dollar.
Their cost breakdown, for those who are curious: https://docs.google.com/spreadsheets/d/1Si3PMUJGR9KrpE5lngSk...
p.s. I'm not saying that Lichess's approach is the best or FAANG's is the worst. Remember, lichess had a 10-hour outage exactly because of the architecture chosen (single-datacenter dependency): https://lichess.org/@/Lichess/blog/post-mortem-of-our-longes... . And outages like that are exactly the reason why multi-datacenter and multi-region architectures are drilled into FAANG engineers.
My point is that there are cases when this approach is legit, but a typical interview is laser-focused on different things, and most probably won't appreciate the "old style" approach to the problem. I'm sure that if Thibault ever decides to interview at a FAANG, he will do neither whiteboard coding nor system design.
The downtime here is mostly OVH's fault. They're not known for fast support on hardware failures; that's why they're cheap. If they had this architecture on AWS EC2 and could just spin up a new AMI, then they'd only have a few minutes of downtime, with the same simple architecture.
I remember Meta having a few outages of their own. And Outlook as well. So I'm not sure what to think now. But sure, on paper FAANG is redundant and hence better.
In my experience, issues scale exponentially with scale. So handling 10x the traffic might mean 100x the potential issues. Redundancy helps with that, so when something inevitably fails, the architecture is able to automatically recover and the end user doesn't see any degradation. So what works for lichess wouldn't work for Meta.
Redis runs on the main server, where lila runs, as indicated in the diagram you linked. And moves are buffered in lila. Redis is only used for pub/sub.
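As a rough sketch of that split (moves buffered in the game process, Redis used only to fan events out), assuming a Node/ioredis client purely for illustration; lila itself is Scala, and the channel name and payload shape here are invented:

```typescript
import Redis from "ioredis";

const pub = new Redis();
const sub = new Redis();

// In-process buffer of moves for ongoing games (the "game state on the
// server itself" part); flushed to the database periodically.
const moveBuffer = new Map<string, string[]>();

function recordMove(gameId: string, uci: string): void {
  const moves = moveBuffer.get(gameId) ?? [];
  moves.push(uci);
  moveBuffer.set(gameId, moves);

  // Redis is only used to fan the event out to the websocket tier.
  pub.publish("moves", JSON.stringify({ gameId, uci }));
}

// The websocket process subscribes and forwards to connected clients.
sub.subscribe("moves");
sub.on("message", (_channel, payload) => {
  const { gameId, uci } = JSON.parse(payload);
  // forward to sockets watching gameId ...
  console.log(`relay ${uci} for ${gameId}`);
});
```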
Why feel anything about it at all? If you work at a FAANG: be glad for the money, or quit if there isn't any. If you don't work at a FAANG: bad hiring makes it easier for you to get hired and make money.
- "While these moves could be calculated client-side, providing them server-side ensures consistency - especially for complex or esoteric chess variants - and optimizes performance on clients with limited processing capabilities or energy restrictions."
Just a wild guess: it might be intended to lower the implementation barrier for new open-source clients on new platforms, and/or pre-empt them from introducing subtle logic bugs that only show up much later.
The rules of chess are a bit tedious to implement, and you can easily get tired and code an edge-case bug that's almost invisible. Lichess itself did this—it once had a logic error that affected a very tiny number (exactly 7) of games:
https://github.com/lichess-org/database/issues/23 ("Before 2015: Some games with illegal moves were recorded")
(I apologize I couldn't find the specific patch that fixed this.)
(The broken code checked that the only pieces on the king's path to its new position were kings and rooks of the appropriate color.)
For those curious about the illegal move, it seems like it's allowing queenside castling through the kingside rook (or vice versa). E.g., if this is the first rank, R _ _ R K _ _ _, then you could make the move O-O-O and end up with _ _ _ R K _ _ _
Naturally, it's not possible to view this move anymore, but this game (https://lichess.org/XDQeUk6j#48) has everything up until the last legal move right before the illegal castling happened.
I can see why that only appeared in 7 games. It's pretty rare to see a rook in between a king and another rook that are otherwise legally able to castle. Even rarer for someone to get into that position and actually try to castle.
Also that linked game is pretty entertaining. It's not a good game, but it can be fun watching lower ranked players make moves that you'd never see in higher level games. Like, who plays Bb5+ against the Scandinavian? Amazing stuff.
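To make the bug class concrete, here is a hypothetical sketch (not lichess's actual code) contrasting a check that only inspects what kind of piece sits on the king's path with one that requires the path to be empty apart from the castling rook itself. It only covers path occupancy, not check/attack rules:

```typescript
// Squares are files 0..7 on the back rank; pieces are a simplified union.
type Piece = { kind: "K" | "R" | "Q" | "B" | "N" | "P"; white: boolean } | null;

function squaresBetween(a: number, b: number): number[] {
  const [lo, hi] = a < b ? [a, b] : [b, a];
  const out: number[] = [];
  for (let f = lo + 1; f < hi; f++) out.push(f);
  return out;
}

// Buggy check (roughly what the linked issue describes): any own king or
// rook on the path is tolerated, so a second rook can be castled "through".
function canCastleBuggy(rank: Piece[], kingFrom: number, kingTo: number, white: boolean): boolean {
  return squaresBetween(kingFrom, kingTo).every(f => {
    const p = rank[f];
    return p === null || (p.white === white && (p.kind === "K" || p.kind === "R"));
  });
}

// Stricter check: squares on the path must be empty, except that the
// castling rook's own square may hold exactly that rook (Chess960 case).
function canCastleFixed(rank: Piece[], kingFrom: number, kingTo: number, rookFrom: number, white: boolean): boolean {
  return squaresBetween(kingFrom, kingTo).every(f => {
    const p = rank[f];
    if (p === null) return true;
    return f === rookFrom && p.white === white && p.kind === "R";
  });
}
```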
Another wild guess: Lichess could be pre-calculating and caching the legal moves for the most common chess positions. While pre-calculating every possible legal move for every position would be impossible, you could pre-calculate the most common openings and endgames, which could cover a lot of real-world positions. This cache could easily be larger than practical for the client, but a server could hold onto it no problem. This could save on net processing time, compared to the client determining all legal moves for every position.
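A sketch of what that guess could look like, purely hypothetical (the article doesn't say lichess does this, and the reply below argues it wouldn't pay off): a bounded server-side cache of legal moves keyed by the position's FEN.

```typescript
// generateLegalMoves is assumed to exist (any move-generation library).
declare function generateLegalMoves(fen: string): string[];

const legalMoveCache = new Map<string, string[]>();
const MAX_ENTRIES = 1_000_000; // bound memory; common openings dominate hits

function legalMovesFor(fen: string): string[] {
  const hit = legalMoveCache.get(fen);
  if (hit) return hit;

  const moves = generateLegalMoves(fen);
  if (legalMoveCache.size < MAX_ENTRIES) legalMoveCache.set(fen, moves);
  return moves;
}
```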
Given that a good chess move generator will run in way less than a microsecond (TBH, probably less than a DRAM lookup into a large hash table), and most chess positions have never been seen before, having a cache sounds counterproductive.
> and/or preempt them from implementing subtle logic bugs that only show up much later.
Validating a submitted move is distinct from listing valid moves. I assumed the server would need to validate regardless of whether it provides a list to the client.
From what I remember, one of the main reasons was also to avoid bloating the JS on the game page. That page is kept especially slim to maximize performance and load times for low-powered devices.
Indeed, it does deal with message loss. I was momentarily confused because in my many thousands of bullet chess games on Lichess I haven't had much, if any, message loss that can be attributed to Lichess's servers (but plenty when my Internet connection is down or unstable).
I will have to take a look, because whatever it's doing, it works very well!
The at-most-once delivery could be an issue if lichess's backend services (lila or lila-ws) crash. Presumably this is a rare enough occurrence that message loss is more of a theoretical concern.
I have no idea, but the in-house pub/sub tech at a previous job used [PGM][1] together with some hand-written brokers and a client library. The overall delivery guarantee is at-most-once, but in over ten years and across tens of thousands of machines in multiple datacenters, they never saw a single dropped message. Not sure how they measured that, but I was told the measurements were accurate.
Well, except for that one major outage where everything shit the bed due to some misconfiguration of IP multicast in the datacenters, or so I was told.
So, maybe if your mission isn't life-critical, you can just wrongly assume exactly-once delivery.
[1]: https://en.wikipedia.org/wiki/Pragmatic_General_Multicast
Lichess also compensates for latency to some extent.
To do that, the server needs some measure of “how long does the client think the player actually took to make a move”, to later subtract latency not attributable to actual thinking from the clock.
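One way that could look, as a hedged sketch rather than lichess's actual protocol: the client reports its own measured think time with the move, and the server only trusts it within bounds it can verify from its own timestamps and the measured round trip.

```typescript
// Field names are invented for illustration.
interface MoveMessage {
  uci: string;
  clientThinkMs: number; // how long the client says the player actually took
}

function chargeClock(
  remainingMs: number,
  msg: MoveMessage,
  serverElapsedMs: number, // time between sending the position and receiving this move
  rttMs: number,           // measured round-trip time to this client
): number {
  // The client cannot have thought longer than the server-observed window,
  // and we won't credit more latency than we actually measured.
  const lowerBound = Math.max(0, serverElapsedMs - rttMs);
  const charged = Math.min(Math.max(msg.clientThinkMs, lowerBound), serverElapsedMs);
  return Math.max(0, remainingMs - charged);
}
```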
What do you mean? If you open a WebSocket connection, it should behave like a normal TCP connection: all sent data is guaranteed to be delivered complete and in order, unless the connection fails.
How would you protect your WebSocket server? I am building a game, but when I put the domain behind Cloudflare (free plan), I get added latency (3x slower) on the players' events.
I saw CF had some paid solution, but I was wondering about a free solution.
I've been managing game servers that get attacked on a daily basis for almost a decade. I've tried Cloudflare a few times (on their business plan) and seen poor results every time.
Cloudflare has a lower-latency product called Argo Smart Routing [1]. When we tried Argo in 2020, we still saw 10+ ms of increased latency across the board, which is unacceptable for competitive multiplayer games. That said, Discord still uses (or used to use) Argo for voice, so there are certainly less latency-sensitive games where it would work well.
The other issue with sockets over Cloudflare (circa 2020, on the business plan) is they get terminated liberally, with the assumption that you have a reconnection mechanism in place. I'd imagine this is acceptable for traditional WebSocket use cases, but not for games.
Services like OVH & Vultr also advertise "DDoS protection for games," but I've found these to be pretty useless in practice. We can only measure traffic that reaches our game servers, so I have no way of knowing if they're actually helping at all.
Your best bet is getting familiar with iptables and fine-tuning rules to match your game's traffic patterns. Thankfully, LLMs are pretty good at generating these rules for you nowadays if you're not already familiar with these tools. Make sure to set up something like node-exporter to be able to monitor attacks and understand where things go wrong. There have been a few other posts on HN in the past that go into more depth about game server DoS mitigation [2] [3].
I built something in the same vein for my startup (Apache 2.0 OSS, steal our code!) [4] that runs a series of load balancers in front of game servers in order to act like a mini-Cloudflare. In addition to the basics I already listed, we also have logic under the hood that (a) dynamically routes traffic to load balancers and (b) autoscales hardware based on traffic in order to absorb attacks. We're rolling out a dynamic bot attack & mitigation mechanism soon to handle more complex patterns.
[1] https://www.cloudflare.com/application-services/products/arg...
[2] https://news.ycombinator.com/item?id=35771466
[3] https://news.ycombinator.com/item?id=28675094
[4] https://github.com/rivet-gg/rivet
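For what it's worth, the reconnection mechanism mentioned a few comments up (for sockets that proxies terminate liberally) is usually a thin wrapper along these lines; a generic sketch with a placeholder resync message, not tied to any of the services discussed:

```typescript
// Minimal reconnecting WebSocket sketch with exponential backoff.
function connect(url: string, onMessage: (data: string) => void, attempt = 0): void {
  const ws = new WebSocket(url);

  ws.onopen = () => {
    attempt = 0;
    // Ask the server for anything we missed while disconnected.
    ws.send(JSON.stringify({ t: "resync" }));
  };

  ws.onmessage = (ev) => onMessage(String(ev.data));

  ws.onclose = () => {
    // Proxies may terminate idle or long-lived sockets; back off and retry.
    const delay = Math.min(1000 * 2 ** attempt, 30_000);
    setTimeout(() => connect(url, onMessage, attempt + 1), delay);
  };
}
```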
As I understand it, the separation between Lila and Lila-ws is primarily for fault isolation rather than independent scaling. Maybe independent scaling becomes useful if WebSocket overhead exceeds what one machine can handle.
TBH this is what I expected for all online chess. How else to reconcile the two players' differing clocks and also prevent client-side cheating?
Is there a point in preventing cheating, really? I can just make a bot...
lichess PGN export example:
> 1. d3 { [%eval -0.15] [%clk 0:01:00] } 1... g6 { [%eval 0.04] [%clk 0:01:00] }
Chess.com PGN export example:
> 1. d4 {[%clk 0:02:58.6]} 1... b6 {[%clk 0:02:59.2]}
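For anyone consuming both exports, the %clk tag is easy to parse either way; a small sketch that handles exactly the two shapes shown above (whole seconds and a decimal fraction):

```typescript
// Parse a [%clk H:MM:SS(.f)] annotation from a PGN comment into milliseconds.
function parseClk(comment: string): number | null {
  const m = comment.match(/\[%clk\s+(\d+):(\d{2}):(\d{2}(?:\.\d+)?)\]/);
  if (!m) return null;
  const [, h, min, sec] = m;
  return Math.round((Number(h) * 3600 + Number(min) * 60 + Number(sec)) * 1000);
}

// parseClk("{ [%eval -0.15] [%clk 0:01:00] }") -> 60000
// parseClk("{[%clk 0:02:58.6]}")               -> 178600
```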
According to this blog post, one-second-only precision hasn't been the case since at least 2017:
https://lichess.org/@/lichess/blog/a-better-game-clock-histo...
"Move times are now stored and displayed with a precision of one tenth of a second. The precision even goes up to one hundredth of a second, for positions where the player had less than 10 seconds left on their clock."
[1]https://www.chess.com/forum/view/help-support/mate-in-one-qu...
chess.com confirmed the issue.
Yet another reason to be skeptical of the quality of hiring at FAANG, if anything.
A bit of a surprising consideration … is that even common in these days of over-fancy web sites?
I tried this and not all the messages I sent arrived.