Especially with Gemini Pro, when providing long-form textual references, putting many documents in a single context window gives worse answers than having it summarize the documents first, asking a question about the summaries only, and then providing the full text of the sub-documents on request (RAG-style, or just a simple agent loop).
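A minimal sketch of that loop, assuming a generic `ask_llm(prompt)` call as a placeholder for whatever client you use (not any particular SDK):

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: swap in your actual model client (Gemini Pro, etc.).
    raise NotImplementedError("plug in your model client here")

def answer_with_summaries(question: str, docs: dict[str, str]) -> str:
    # 1. Summarize each document individually, keeping context small.
    summaries = {
        name: ask_llm(f"Summarize this document in a few sentences:\n\n{text}")
        for name, text in docs.items()
    }

    # 2. Ask the question against the summaries only.
    index = "\n".join(f"[{name}] {s}" for name, s in summaries.items())
    verdict = ask_llm(
        f"Summaries:\n{index}\n\nQuestion: {question}\n"
        "If a summary alone is enough, answer. Otherwise reply "
        "NEED:<document name> to request the full text."
    )

    # 3. Simple agent loop: pull in a full document only on request.
    while verdict.startswith("NEED:"):
        name = verdict.removeprefix("NEED:").strip()
        verdict = ask_llm(
            f"Full text of [{name}]:\n{docs[name]}\n\nQuestion: {question}\n"
            "Answer, or reply NEED:<document name> for another document."
        )
    return verdict
```

The point is that the model's working context only ever holds the summaries plus at most one full document at a time, rather than everything at once.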
Similarly, I've personally noticed that Claude Code with Opus or Sonnet gets worse the more compactions happen. It's unclear to me whether the summary itself degrades or whether the context window ends up with a higher percentage of less relevant data, but even clearing the context and asking it to re-read the relevant files (even if they were mentioned and summarized in the compaction) gives better results.
Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.
Those measures might work. (Or they might be impossible, or insufficient.) Changing your license won't.
Shout out to 'The Man in Seat 61'; couldn't have done it without it.
At 10Gbps it would take 3.4 seconds just to serialize the frame.
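For reference, the arithmetic behind that figure, with the frame size back-derived from the comment's numbers (3.4 s at 10 Gbps implies roughly 34 Gbit, or about 4.25 GB; the size is an inference, not stated in the source):

```python
# Serialization delay = frame size in bits / line rate.
frame_bytes = 4.25e9    # ~4.25 GB, back-derived from the 3.4 s / 10 Gbps figures
link_rate_bps = 10e9    # 10 Gbps
delay_s = frame_bytes * 8 / link_rate_bps
print(f"{delay_s:.1f} s to serialize")  # -> 3.4 s
```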
How did you go about validating that the game is fun? Did you end up having an intuitive sense for it, or did you need external feedback to refine the mechanics?
My Steam Deck might as well be a billionaire/Balatro machine.