I wasn't quite sure if this qualified as "Show HN" given you can't really download it and try it out. However, dang said[0]:
> If it's hardware or something that's not so easy to try out over the internet, find a different way to show how it actually works—a video, for example, or a detailed post with photos.
Hopefully I did that?
Additionally, I've put code and a detailed guide for the netboot computer management setup on GitHub:
https://github.com/kentonv/lanparty
Anyway, if this shouldn't have been Show HN, I apologize!
But, in fact, some friends who regularly attended LAN parties in the Bay Area moved to Austin around the same time we did. And some others are also willing to travel for New Year's.
(Most parties are just local people, of course.)
I'm in a completely different part of the world, but for similar reasons I ended up with a few friends in tech who all moved to the same area. I've also met people with profiles similar to ours, drawn there by the same things.
I think the absence of any Quake variant from the standing game offerings is a glaring omission, but one that's easily rectified. :)
It's really heartening to see LAN gaming carried on in a way that minimizes hassle and setup and maximizes actual gaming. We spent far too much time in the '90s and 2000s dealing with driver issues and the like. Bravo.
There was always someone who would just be totally unable to connect with someone else.
It is amazing to think how much IPv4 and IPv6 "just work" in comparison.
The tabletops also seem a bit too thin and wiggly for my taste, but honestly, for LAN parties with chill people you personally know, it's OK.
As for the actual host setup with a single disk image: great job! LAN gaming centres do something similar with their setups, with some differences (a lot of centres either use Windows-based diskless solutions that mount VHDX files as remote drives over iSCSI, or use ZFS-based snapshotting, which is my personal favourite).
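For anyone curious what the ZFS approach looks like in practice: the idea is one "golden" master image plus a cheap copy-on-write clone per client machine. A rough sketch in Python (the pool/dataset names and the reset flow are my assumptions, not any particular centre's setup):

    import subprocess

    GOLDEN_SNAP = "tank/games/golden@ready"  # hypothetical snapshot of the patched master zvol

    def zfs(*args):
        # Thin wrapper around the zfs CLI; needs root or delegated permissions.
        subprocess.run(["zfs", *args], check=True)

    def provision(client: str):
        # Clones share blocks with the golden snapshot until the client
        # writes, so each one is nearly free to create.
        zfs("clone", GOLDEN_SNAP, f"tank/games/{client}")

    def reset(client: str):
        # "Fresh image" after a session = destroy the clone and re-create it.
        zfs("destroy", f"tank/games/{client}")
        provision(client)

    for pc in ("pc01", "pc02", "pc03"):
        provision(pc)

Each clone would then be exported to its PC as an iSCSI target, so every machine boots the same patched image but keeps its own writes.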
But all in all, seems like my dream house :)
I own a chain of LAN gaming centres, so my feedback is definitely skewed toward the business perspective quite a bit.
For one, if you get a bunch of nerds together, a sizable fraction are likely to have sensory issues, and they won't come again if you don't make it welcoming for them.
Some video games require sound because it conveys information, but they can usually be configured to play only those sounds, or to turn on an accessible visual indicator.
Each computer has a sound bar and everyone just uses that. Yeah, that means everyone's sound gets mixed up and you don't get positional audio, but in practice it's fine and we'd rather be able to yell at each other.
That might or might not be because the games we've mostly been playing at our LAN parties fit a somewhat different profile than "chill co-op": more MOBAs and tactical/arena shooters. In those styles of games, visual cues don't really help, and not having clear audio puts you at a disadvantage.
The music is still playing in the background, though; the headsets are not 100% soundproof, and you can still easily communicate via VoIP.
Yeah, the "live talking" aspect without headsets isn't there, but I've found it doesn't bother me in the slightest. You still are in the same room, you get the "shoulder sense" of your team there, you still celebrate and have fun as one and lose as one singular organism, and that's the feeling I've kinda been chasing on my LAN parties and in my LAN centre
I ended up putting together my own thing. I saw various products that seemed like they might be what I wanted but they always seemed... sketchy.
CCBoot is the Windows Server-based diskless solution I mentioned, and the same company provides CCDisk, which can do a "hybrid" mode: a small SSD in every PC holds a pre-installed, pre-configured base OS, which then mounts an iSCSI game drive.
GGRock is a fantastic product, in my opinion. It is pricey, but whereas CCBoot relies heavily on you knowing its inner workings, GGRock is pretty much a turnkey solution.
There is also CCu Cloud Update, which I have heard of but haven't tried myself, since, from what I remember, they only sell licenses in Asia.
LANGAME Premium is an add-on for a LAN centre ERP system; it's basically an ITaaS solution based on TrueNAS. Of all the paid offerings, that one is my favourite so far, but you have to use their ERP and actually run a business for it to be cost-effective.
NetX provides an all-in-one (router, traffic filter, and iSCSI target) NUC-like server with pre-configured software on a subscription basis. I am most skeptical of that one, simply because, from my research, two NVMe drives can't really handle the load of a fully occupied 40+ machine LAN centre (see the back-of-envelope sketch after this list). Not for long, at least.
...and homebrew, of course. I myself run a homebrew ZFS-based system that I'm extremely happy with.
In your case, I'd go with building my own thing too. It doesn't take a lot of time if you know the inner workings, and you have no additional OPEX for your room. :)
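To put rough numbers on the NVMe skepticism above, a quick back-of-envelope (the per-seat figure is my assumption; real load is burstier and more random-read-bound than this):

    # Can two NVMe drives feed a fully occupied 40-seat centre?
    seats = 40
    per_seat_mb_s = 150                 # assumed sustained read per client during a mass game launch
    demand = seats * per_seat_mb_s      # ~6,000 MB/s aggregate

    drives = 2
    per_drive_mb_s = 2500               # assumed real-world sustained read per consumer NVMe
    supply = drives * per_drive_mb_s    # ~5,000 MB/s

    print(f"demand ~{demand} MB/s vs supply ~{supply} MB/s")
    # Even the sequential numbers come up short under these assumptions, and
    # 40 concurrent clients generate random reads, where NVMe drives fall
    # well below their spec-sheet figures.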
I never had the tenacity to consider my build "finished," and definitely didn't have your budget, but I built a 5-player room[1] for DotA 2 back in 2013.
I got really lucky with hardware selection and ended up fighting with various bugs over the years... diagnosing a broken video card was an exercise in frustration because the virtualization layer made BSODs impossible to see.
I went with local disk-per-VM because latency matters more than throughput, and I'd been doing iSCSI boot for such a long time that I was intimately familiar with the downsides.
I love your setup (thanks for taking the time to share this BTW) and would love to know if you ever get the local CoW working.
My only tech-related comment: I'll also confirm that those 10G cards are indeed trash, and I'd humbly suggest an Intel-based eBay special. You could still load iPXE (I assume you're using it) from the onboard NIC and keep using that NIC for WoL, but shift the netboot over to the add-in card via a script, and probably get better stability and performance.
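(If you're scripting the WoL side yourself: a magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, broadcast over UDP. A minimal sketch; the MAC below is made up:)

    import socket

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
        # Magic packet: 6 x 0xFF, then the MAC address repeated 16 times.
        payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    wake("aa:bb:cc:dd:ee:ff")  # hypothetical MAC of one of the machines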
[1]: https://imgur.com/a/4x4-four-desktops-one-system-kWyH4
Yeah I'm pretty sure my onboard 10G Marvell AQtion ethernet is the source of most of my stability woes. About half the time any of these machines boot up, Windows bluescreens within the first couple minutes, and I think it has something to do with the iSCSI service crashing. Never had trouble in the old house where the machines had 1G network -- but load times were painful.
Luckily if the machines don't crash in the first couple minutes, then they settle down and work fine...
Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...
I think another issue is the limited amount of PCI-E lanes now that HEDT is dead. I picked up a 5930k for my build at the time for its 40 PCI-E lanes. But now consumer CPUs basically max out at 20-24 lanes.
Also, with the best CPUs for gaming nowadays being AMD's X3D series because of the additional L3 cache, I wonder about the performance hit of two different VMs fighting over that cache. Maybe the rumored 9950X3D will have 3D cache on both CCDs, so you could pin each VM to its own set of cores and cache. The 7950X3D had 3D cache on only half of its cores, so games generally performed better pinned to just those cores.
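(On a Linux host you can experiment with that kind of pinning without touching libvirt config; a sketch, assuming the V-Cache CCD shows up as CPUs 0-7 like on the 7950X3D, which is worth verifying on your own chip:)

    import os

    VCACHE_CPUS = set(range(0, 8))    # assumed: CCD with the extra L3 (3D V-Cache)
    FREQ_CPUS = set(range(8, 16))     # assumed: the other CCD, higher clocks

    def pin(pid: int, cpus: set):
        # Restrict a process (e.g. a QEMU VM's main PID) to the given CPUs.
        os.sched_setaffinity(pid, cpus)

    # Hypothetical usage with two VM process IDs:
    # pin(vm1_pid, VCACHE_CPUS)   # cache-hungry game VM
    # pin(vm2_pid, FREQ_CPUS)     # second VM on the frequency CCD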
So with only 2-3 VMs per PC, and a GPU still needed for each VM (the most expensive part anyway), I'd pay a bit more to do it without VMs. The only way I'd be interested in multiseat VM gaming again would be if I could use GPU virtualization: splitting a single GPU across many VMs. But as you say in the article, that's usually been limited to enterprise hardware. And even then it'd be interesting only for the flexibility of running one high-end GPU when I'm not having a party.
I bet one could put an unreasonable amount of effort into convincing an Nvidia Bluefield card to pretend to be a disk well enough to get Windows to mount it. I imagine that AWS is doing something along those lines too, but with more cheap chips and less Nvidia markup…
There has got to be a way to convince Windows to do an overlay block device that involves magic words like "thin provisioning". But two seconds of searching didn't find it. Every self-respecting OS (Linux, FreeBSD, etc.) has had this capability for decades, of course. Amusingly, AFAICT, major clouds also mostly lack this capability: the obvious solution in AWS (booting everything off an AMI) is notorious for poor performance.
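(On Linux, one common incarnation of this is a qcow2 overlay; a minimal sketch driving qemu-img from Python, assuming a shared read-only golden image named base.img:)

    import subprocess

    BASE = "base.img"  # assumed: the shared, read-only golden image (raw format)

    def make_overlay(name: str):
        # Reads fall through to BASE; writes land only in the per-machine
        # overlay file. Resetting a machine = delete and re-create its overlay.
        subprocess.run(
            ["qemu-img", "create", "-f", "qcow2",
             "-b", BASE, "-F", "raw", f"{name}.qcow2"],
            check=True,
        )

    for pc in ("pc01", "pc02"):
        make_overlay(pc)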
The good Intel 10G cards were not expensive at all, by the way. I bought some for later additions, and they were cheaper than the premium we paid for the gamer motherboards with included 10G cards, which I saw you were unhappy with too.
Bulk buying is probably hard, but ex-enterprise Intel 10G cards on eBay tend to be pretty inexpensive. Dual-SFP+ X520 cards are regularly available for $10; dual-port 10GBASE-T X540 cards run a bit more, with more variance, at $15-$25. No 2.5/5Gb support, but my 10G network equipment can't do those speeds either, so no big deal. These are almost all x8 cards, so you need a slot that can accommodate them, but x4 electrical should be fine. (I've seen reports that some enterprise gear has trouble working properly in x1/x4 slots beyond the bandwidth restrictions, which shouldn't be a problem here; if a dual-port card wants x8 and you only have x4 but only use a single port, that should be fine.)
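(The x4 claim checks out on paper; a quick sanity check, assuming the X520's PCIe 2.0 interface:)

    # PCIe 2.0: 5 GT/s per lane with 8b/10b encoding = 4 Gbit/s usable per lane.
    lanes = 4
    usable_gbps = lanes * 5 * (8 / 10)   # 16 Gbit/s across an x4 link

    needed_gbps = 1 * 10                 # one 10G port fully saturated

    print(f"x4 link: {usable_gbps:.0f} Gbit/s available, {needed_gbps} Gbit/s needed")
    # Plenty for one port; both ports of a dual card at full tilt (20 Gbit/s)
    # would oversubscribe an x4 PCIe 2.0 link, which is why they ship as x8.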
I think all of mine can PXE boot, but sometimes you have to fiddle with the EEPROM tools, and they might be legacy-only (no UEFI PXE), which is fine for me.
And you usually have to be OK with running them with no brackets, because they usually come with low-profile brackets only.
If you make a decision on a 10G card (SFP+ or Ethernet), I'd like to hear what you picked.
No need to buy new for most computing equipment unless you're looking for the absolute latest and greatest.
> onboard 10G Marvell AQtion ethernet
I had similar problems with an Aquantia 10GbE NIC (which AQtion appears to be the rebranded name for, post-acquisition by Marvell), and it turned out to be the network chip overheating because it was poorly thermally bonded to a VRM heatsink that defaulted to turning on at something like 90C. Adding a thicker thermal pad and setting the VRM fan to always be on at 30% solved my problems.
Another data point that it is indeed possible. I had a dual Xeon E5-2690 v2 setup with two RX 580 8GB cards passed through to separate VMs, and with memory and CPU pinning it was a surprisingly resilient setup. 150+ FPS in CSGO with decent 1% lows (like 120 if I remember correctly?) which was fine since I only had 60Hz monitors. I have a Threadripper workstation now, I should test out to see what kind of performance I can get out of that for VM gaming...
> Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...
I have had very good luck with Intel X540 cards. $20-40 on eBay, and there are hundreds (if not thousands) available. They're plug-and-play on any modern Linux but need an Intel driver on Windows, if I remember correctly. I've never had one die, and I've never experienced a crash or network dropout in the 9 years I've been running them. The Marvell chipset just seems terrible, unfortunately; I've had problems with it on multiple cards and motherboards, on every OS under the sun.
Aside from all of the extremely epic technology and whatnot, I have got to say: the elevated view and outlook of your place is sensational. Congratulations on putting together such a terrific place to raise a family.
Oh, and worth mentioning: I sincerely appreciated and enjoyed reading your comprehensive Q&A section beyond the images (which themselves had really awesome annotations). Thanks for sharing!
My setup only supported 12, but was designed in a way where you could have 3 teams of 4 or 4 teams of 3 that got their own private area so they could more easily conspire against their opponents.
I think my most interesting design choice was that I had half the machines routed from the attic and half routed from the basement. Part of this had to do with retrofitting my setup into a house over 100 years old, but I thought it also worked very well. If I were to do it again, I'd probably mount all of the computers in the basement, since it would provide extra heat (for the house) in the winter and stay cooler (for the computers) in the summer when under load.
I have since moved, but haven't bothered to make it happen again. Life with kids is too busy, and I've largely abandoned the hobby because I believe it would not be a positive influence given the particular quirks of my children's personalities. Slowly easing back into the waters with board games, though.
From the description this sounds like the most elaborate setup I've heard of aside from my own.
ZIRP certainly had something to do with this too! Don’t overlook ridiculous fiscal and monetary policy.
Henry George begs to differ. He would say that you start with The One Tax and the resulting pressure on zoning will be unbearable. Good reading: "Land is a Big Deal" by Lars Doucet.
Mmm... if you introduced higher taxes for anyone who owns multiple houses and earns rental income from one or more of them, and you eliminated their ability to claim deductions under the current tax code, it would definitely move the needle on housing prices and availability.
The global housing crisis is the result of international organised crime owning or operating most of the large construction conglomerates, using real estate as a fiat currency to wash the proceeds from all their illicit business, and (org crime infested) private equity companies cashing in on the former situation, pumping assets by buying up available real estate just to make it unavailable.
CRIME is the real reason people worldwide cannot afford a house.
I’m also fairly convinced he didn’t capture one tenth of one percent of the value he created, so I’m not sure how anyone can argue this is ‘unfair’.
Either way, people like me aren’t going to be able to capture even a tenth of the success of joining Google in 2005 or buying a $1m house in Palo Alto ~4 years after graduating (I’m 6.5 years out of graduating) because people like me aren’t as human as the folks that own this house.
(I want both, but I don't want more taxes to solve the housing problem, because they won't.)
Unless you're talking about a new kind of wealth tax, but those aren't particularly popular...
Have I missed something in this conversation?
> Keyboard: Logitech K120 Wired — The world's cheapest keyboard at $13 a pop. Works perfectly fine for all gaming needs.
I can't imagine playing stuff like Overwatch on a $13 membrane office keyboard after spending more than $100k on the setup, especially when cheap mechanical keyboards aren't that much more expensive.
That said, guests are welcome to bring any peripherals they want. There's a USB hub at each station to plug stuff in.
I use the 2.4GHz Logi Bolt USB receiver when I'm on PCs or the server (way easier than bringing cables to the garage), and Bluetooth for my phone or Steam Deck. I was initially put off by the half-size arrow keys, which I use in the terminal and in certain games that don't use WASD, but I've come around, and I'm really fine like this. Hope it lasts, but Logitech peripherals generally do.
I also have to switch peripherals between my gaming PC and work laptop every day, so wireless really helps keep cables off my desk. And I can bring it with me should I need a keyboard away from home, but usually I'm AFK when not at home.
Edit: kentonv answered before I hit submit. BYOK/M if you want, nice.
Me, I bought a mechanical keyboard but I despise it. Switched to a Logitech Keys.
I use TTC Silent Bluish White switches which produce a muted "thock" sound, rather than the loud "clickety-clack" that you're probably thinking of. They're only slightly louder than a typical membrane keyboard.