I would caution Facepunch, though, that what made their past games successful wasn't perfection. In GMod's case, I'd actually say the imperfection was the charm.
>Obviously this isn't the Source 2 code, that's up to Valve to open source if they want.
Does this mean you need Source 2 to develop with S&box?
The game reminds me of sitting down at a poker table in a casino. It's very unforgiving - you grind, invest a lot of time, and make calculated bets on whether a raid is winnable, but a single failed raid can instantly wipe out everything.
I wish someone would make a browser-based version that was fun to play. I've thought about it for some time, but the struggle is scoping an MVP that's just as compelling given the constraints (e.g. a 2D or top-down version makes it harder to do things like build multi-story buildings and raid them).
I also appreciate the AI search results a bit when I'm looking for something very specific (like what the YAML definition for a Docker Swarm deployment constraint looks like), because the AI just gives me the snippet, while the search results are 300 Medium blog posts about how to use Docker, none of which explain the variables or what each one does. Even the official Docker documentation site is a mess to navigate when you're trying to find anything relevant!
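For the record, the kind of snippet I'm after looks roughly like this (a minimal compose-file sketch; the service name and node label are made up):

```yaml
# docker-compose.yml -- the deploy section is honored by swarm
# when deployed with `docker stack deploy`
version: "3.8"
services:
  web:                    # hypothetical service name
    image: nginx:alpine
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == worker            # only schedule on worker nodes
          - node.labels.region == us-east  # node label set by an admin
```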
The problem isn't just that ads can't be served. It's that every technical measure that attempts to block their service produces new ways of misleading website owners and the services they use. Perplexity evades every attempt to detect and prevent abuse coming from their servers.
None of this would've been necessary if companies like Perplexity had just acted like responsible web services and told their customers "sorry, this website doesn't allow Perplexity to act on your behalf".
The open protocol you want already exists: it's the user agent. A responsible bot will set the correct user agent, follow the instructions in robots.txt, and leave it at that. Companies like Perplexity (and many other (AI) scrapers) don't want to participate in such a protocol. They will seek out and abuse any loophole in any well-intentioned protocol anyone can come up with.
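To make it concrete, "responsible" is about a dozen lines of stdlib Python (the bot name and URLs below are placeholders):

```python
from urllib import request, robotparser

UA = "ExampleBot/1.0 (+https://example.com/bot)"  # placeholder bot identity

# Honor robots.txt before fetching anything
rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()

url = "https://example.com/some/page"
if rp.can_fetch(UA, url):
    req = request.Request(url, headers={"User-Agent": UA})
    with request.urlopen(req) as resp:
        body = resp.read()
else:
    # A responsible bot stops here instead of retrying with a browser UA
    print("robots.txt disallows this URL; skipping")
```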
I don't think anyone wants Cloudflare to have even more influence on the internet, but it's thanks to the growth of inconsiderate AI companies like Perplexity that these measures are necessary. The protocol Cloudflare proposes is open (it's just a signature); the problem people have with it is that bot operators have to ask Cloudflare nicely, which is what allows website owners to track and prevent abuse from bots. For any Azure-gated website, your bot would need to ask permission there as well, as with Akamai-gated websites, and maybe even individual websites.
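And the "just a signature" part really is conceptually tiny. A hand-wavy sketch of the idea, not Cloudflare's actual spec (this uses the third-party Python `cryptography` package; the message contents are placeholders):

```python
# Conceptual sketch only: a bot signs each request with its private key,
# and the gatekeeper verifies it against the bot's published public key.
from cryptography.hazmat.primitives.asymmetric import ed25519

bot_key = ed25519.Ed25519PrivateKey.generate()
public_key = bot_key.public_key()  # published so verifiers can check signatures

message = b"GET /some/page host=example.com"  # placeholder for signed request metadata
signature = bot_key.sign(message)

# The verifier (e.g. Cloudflare, on the site's behalf) checks the signature;
# verify() raises InvalidSignature if it doesn't match.
public_key.verify(signature, message)
print("signature valid: request really came from the registered bot")
```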
A new protocol is a technical solution. Technical solutions work for technical problems. The problem Cloudflare is trying to solve isn't a technical problem; it's a social problem.
I’m not here to propose a solution. I’m here as an end-user saying I won’t go back to the old experience, which is outdated and broken.
Cloudflare is not the gatekeeper; it's the owner of the site that blocks Perplexity who's "gatekeeping" you. You're telling me that's not right?
https://blog.cloudflare.com/perplexity-is-using-stealth-unde...
Perhaps a way to serve ads through the agents would be good enough. I'd prefer that to be an open protocol rather than something controlled by a single company.
Playwright MCP has been a big help for frontend work. It gives the agent faster feedback when debugging UI issues. It handles responsive design too, so you can test both desktop and mobile views. Not sure if you know this, but Claude Code also works with screenshots. In some cases I provide a few screenshots and the agent uses Playwright to verify that the output is nearly pixel-perfect. It has been invaluable for me and is definitely worth a try if you haven't already.
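If you want to try it, Claude Code can pick up the server from a project-level .mcp.json (you can also register it with `claude mcp add`). The registration is roughly this; the exact package pinning may differ in your setup:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```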
- Ability to clearly define requirements up front (the equivalent mistake in coding interviews is to start by coding, rather than asking questions and understanding the problem + solution 100% before writing a single line of code). This might be the majority of the interview.
- Ability to anticipate where the LLM will make mistakes. See if they use Perplexity or Context7, for example; relying solely on the LLM's training data is a mistake.
- A familiarity with how to parallelize work, and when that's useful vs. not. Do they understand how to use something like worktrees, multiple repos, or Docker to split up the work? (See the worktree sketch after this list.)
- Uses tests (including end-to-end and visual testing)
- Can they actually deliver a working feature/product within a reasonable amount of time?
- Does the final result look like AI slop, or is it actually performant, maintainable (by both humans and fresh context windows), well-designed, and in line with best practices?
- Are they able to work effectively within a large codebase? (this depends on what stage you're in; if you're a larger company, this is important, but if you're a startup, you probably want the 0->1 type of interview)
- What sort of tools are they using? I'd give more weight to someone using Claude Code, because that's just the best tool for the job. And if they're just doing the trendy thing, like using Claude Agents, I'd subtract points.
- How efficiently did they use the AI? Did they just churn through tokens? Did they use the right model given the task complexity?
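On the worktrees point: the mechanics are simple enough to probe directly in an interview. A minimal sketch (branch names are just examples):

```bash
# One checkout per agent/task, all sharing a single repo's history
git worktree add ../feature-auth -b feature-auth
git worktree add ../feature-billing -b feature-billing

# Run an independent Claude Code (or other agent) session in each
# directory; they can't clobber each other's working trees.
git worktree list

# Clean up when a task lands
git worktree remove ../feature-billing
```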
For some new stuff I'm working on, I use Rails 8. I also use Railway as my host, which isn't as widely used as a service like Heroku, for example. Rails 8 was just released in November, so there's very little training data available, and it takes time for people to upgrade, for gems to catch up, for conversations to bubble up, etc. Operating without these two MCP servers usually caused Claude Code to repeatedly stumble over itself on more complex or nuanced tasks. It was good at setting up the initial app, but when I started getting into things like Turbo/Stimulus, and especially parts of the UI that conditionally show, it really struggled.
It's a lot better now - it's not perfect, but it's significantly better than relying solely on its training data or searching the web.
I've only used Claude Code for about 4 weeks, but I'm learning a lot. It feels less like I'm an IC doing this work and more like my new job is (1) product manager who writes clear PRDs and works with Claude Code to build them, (2) PR reviewer who looks at the results and provides a lot of guidance, and (3) tester. I allocate my time roughly 50%/20%/30%, respectively.