o11c · 9 months ago
Unrelated to the specific game:

Note that the "can't parallelize AI/scripting" is a consequence of a design choice that many people make without thinking - namely, that all actors should, internally, have perfectly accurate and up-to-date knowledge of the world, using the same in-game global objects.

If each actor makes a copy of the world for what they know, there's nothing preventing parallelism. This does imply quadratic memory, but you can just cap this - if there's a lot going on, it makes sense for an actor to lose track of some of it. And once you're working with imperfect knowledge, you can just ... throttle the AI's think time if it's doing too much.

Another thing you can do, assuming you already have suspendable scripts, is "merge" the thread affinity of two scriptable objects when you know they need to interact (only really needed for transaction-like things; often you can just emit some async state to be resolved later). Actually, you don't need to suspend if you have enough static analysis, but suspending is probably actually the easier thing to do.

Related to this, IMO it's a mistake to expose functions to the scripting API that access state implicitly. It's a better design to expose objects (that can only be accessed/mutated if passed as an argument) - and to expose a different set of objects depending on what context the script is called in. A type-checker is really useful here.

Aurornis · 9 months ago
> If each actor makes a copy of the world for what they know, there's nothing preventing parallelism

Unfortunately not that easy.

If two scripts consume an item or decrement a variable (hit points, credits, item count), then it's not as simple as giving each script a copy of the world. You get double spending, duplicated items, etc.

o11c · 9 months ago
The point of the copy is that they can't actually change anything outside of their local world. They can only send out an async request for global state to change, or send a request to be scheduled on the same executor.
dustbunny · 9 months ago
Another model may be that all actors read from frame N-1 and write to frame N. Then there aren't many copies, just one.
o11c · 9 months ago
That's fine if actors are not doing much (e.g. just pathfinding and shooting), but is likely to fall apart long before scripting becomes a thing, since there are a lot of tasks where different actors do want to mutate the same object.
akrotkov · 9 months ago
Two, surely? The previous one still being used and the new one being written.

(Note that this is how most rendering artifacts were fixed long ago - the on screen and the off screen buffers were swapped, so nobody would "see" in progress scenes)

egypturnash · 9 months ago
"From the feedback on Profiling branch we have seen several times that players had issues caused by setting wrong command-line parameters. That was typically because an old optimization guide told them it would make their game run better. But now that the game is more multithreaded than it used to be, bad settings have a more noticeable and potentially negative impact."

I wonder if they also added a popup that detects some of these pessimal command-line parameters being used and says "hey we did a lot of optimizing work that's made these settings iffy, please go check out this blog post for more details".

hinkley · 9 months ago
> The design of the job system decides how we are able to utilize it, and the old design (in use since Arma 2) worked, but it was quite primitive. It only allowed submitting jobs as one block, and there could only be one block active at a time. This means all jobs had to be submitted at once, after which we needed to wait for the jobs to be finished, before we could continue doing other things.

> This is also called "Fork-Join".

I removed one of these from a batch processing system and got about 3x the throughput in the process.

As it was written, the system was trying not to be a noisy neighbor for online (consumer-facing) traffic and was not succeeding. It could still cause brownouts if too many other things were running at the same time. So the authors thought to have the users run Splunk queries, looking for traffic from these other jobs, to see if it was safe to run the CI/CD pipeline. That's ridiculous. It's the sort of thing someone Ops-facing forgets that nobody else on the team gives a shit about most days.

If we wanted people to be able to run it without consequence, it needed to go quite a bit slower, but it was already taking more than half of the wall-clock time spent in a runbook as it was. So I replaced it with a queue that kept n tasks running at all times, instead of anywhere between 1 and n. I ended up being able to tune it to about 75-80% of the original number with little problem. Well, one problem: there was one group of customers whose assets generated about 10x the load of any other customer, and the way we queued the work clustered by group. Once I sorted by customer instead of group, we stopped getting clusters of excess workload, and it also became a hell of a lot easier to eyeball the status messages and estimate how far along you were.

Spiky processes cause problems for both load-shedding and autoscaling schemes. And starting too many tasks at once causes memory pressure on your end, since most data processing tasks use more memory in the middle than at the beginning or the end. You are better off self-throttling your traffic: it makes the memory load more sequential, and it gives compensatory systems the time they need to adapt to your sudden peak of traffic.

abrookewood · 9 months ago
FYI, this 12-year-old game is still incredible if you are into mil sims. Really worth checking out, and it's amazing that they are still releasing updates for it.
absurdo · 9 months ago
Ah good ol’ Operation Flashpoint: Cold War Crisis. What an absolute time to be alive.