Tip 1: it consistently ignores my GEMINI.md file, both global and local, even though it always claims "1 GEMINI.md file is being used", probably just because the file exists at the expected path.
Tip 12, had no idea you could do this, seems like a great tip to me.
Tip 16 was great, thanks. I've been restarting it every time my environment changes, or having it run direnv for me.
All the same warnings about AI apply to Gemini CLI; it hallucinates wildly.
But I have to say Gemini CLI gave me my first really fun experience using AI. I was a latecomer to AI, but what really hooked me was when I gave it permission to freely troubleshoot a k8s PoC cluster I was setting up. Watching it autonomously fetch logs and objects and troubleshoot until it found the error was the closest thing to getting a new toy for Christmas that I've had in many years.
So I've kept using it, but it is frustrating sometimes when the AI is behaving so stupidly that you just /quit and do it yourself.
In my limited testing, I found that Gemini 3 Pro struggles with even simple coding tasks. Granted, I haven't tested complex scenarios yet, and I've only tried it via Antigravity, but it's very difficult to do much with the limited quota it provides. Impressions here - https://dev.amitgawande.com/2025/antigravity-problem
Personally, I consider Antigravity a positive and ambitious launch. My initial impression is that there are many rough edges to be smoothed out. I hit many errors, like 1. failures communicating with Gemini (Model-as-a-Service) and 2. "Agent execution terminated due to errors", etc., but somehow it completed the task (the verification/review UX is bad).
Pricing for paid plans with AI Pro or Workspace will be key to its adoption, once Gemini 3.x and the Antigravity IDE are ready for serious work.
> I haven't tried complex coding tasks using Gemini 3.0 Pro Preview yet. I reckon it won't be materially different.
Gemini CLI is open source and being actively developed, which is cool (/extensions, /model switching, etc.). I think it has the potential to get a lot better and even get close to the top players.
The correct way of using Gemini CLI is: ABUSE IT! Its 1M context window (soon to be 2M) and generous free daily quota are huge advantages. It's a pity that people don't use it enough (ABUSE it!). I use it as a TUI / CLI tool to orchestrate tasks and workflows.
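To give a concrete idea of what I mean by orchestration, here's a minimal sketch that shells out to the CLI from Python. It assumes the non-interactive -p/--prompt flag (check `gemini --help` on your version), and the helper name and the two-step diff-to-commit-message workflow are just illustrations:

```python
# Sketch: using Gemini CLI non-interactively as a building block in a workflow.
# Assumes `gemini -p "<prompt>"` prints the model's reply to stdout.
import subprocess


def ask_gemini(prompt: str, timeout: int = 300) -> str:
    """Run one non-interactive Gemini CLI call and return its stdout."""
    result = subprocess.run(
        ["gemini", "-p", prompt],
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    # Chain two steps: summarize the staged diff, then draft a commit message.
    diff = subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True
    ).stdout
    summary = ask_gemini(f"Summarize this diff in three bullet points:\n{diff}")
    commit_msg = ask_gemini(f"Write a one-line commit message for:\n{summary}")
    print(commit_msg)
```

The big context window is what makes this kind of piping practical; you can dump whole diffs or logs into a single prompt without worrying much about truncation.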
> Fun fact: I found Gemini CLI pretty good at judging/critiquing code generated by other tools LoL
Recently I even hooked it up with Homebrew via MCP (other Linux package managers could work as well?) and a local LLM-powered knowledge/context manager (Nowledge Mem). You can get really creative abusing Gemini CLI and unleash the Gemini power.
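For anyone who hasn't tried it, MCP servers get registered in Gemini CLI's settings.json under an "mcpServers" entry, roughly like the sketch below. The exact schema can vary by version, and the "some-homebrew-mcp-server" package name here is hypothetical, just a placeholder for whatever server you actually run:

```json
{
  "mcpServers": {
    "homebrew": {
      "command": "npx",
      "args": ["-y", "some-homebrew-mcp-server"],
      "env": {}
    }
  }
}
```

Once registered, the CLI can call the server's tools during a session, which is how the package-manager and knowledge-manager hookups work.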
I've also seen people use Gemini CLI in SubAgents for MCP processing (it did work and avoided polluting the main context); I couldn't help laughing when I first read this -> https://x.com/goon_nguyen/status/1987720058504982561
So I did it on a laptop. The process seemed legit, but the entire flow was weird and not intuitive; I had to stop and read twice before proceeding (e.g. "Where to store passkey", disable all other MFA and only use the Security Key, a backup recovery code was given...). After going through all that, I found myself locked out of X because of the infinite re-enroll loop, OMG.
Contacted support, let's see how long it takes. After this, I don't think I'll continue to use Security Key with X...
Text message and Authenticator were disabled, and two YubiKeys were present under Security Keys. I don't get the point of this process.
What's more amazing these days is that technology like `bootmod3` (bm3) makes flashing (remapping) stage one as easy as 1-2-3. One needs to understand what they are doing, though.
One thing about Elastic is that their roots are in on-prem / self-managed software and selling support to enterprise customers. This led to our cloud strategy being based around ECE (Elastic Cloud Enterprise), with the idea that we would eventually fully unify this on-prem version of our Cloud product with our actual SaaS and just run ECE "at scale". During that time we got stuck in the slower Elasticsearch "quarterly minor + monthly patch" release cycle (SaaS did have a shorter one, but it was also troubled) and spent countless engineering hours troubleshooting enterprise customers' own infrastructure (imagine stuff like "ohhh, I see, you vMotioned a server hosting ZooKeeper containers, and you're running on spinning disks" after 2+ weeks of back and forth).

We couldn't easily add table-stakes features to our SaaS because we needed it to run on-prem too, even though ECE is very limited in the types of supporting infrastructure we could add (basically just ZooKeeper and Elasticsearch). I think they are trying to move past this strategy and onto a SaaS-only, K8s-based approach, but I fear too much time was squandered. I hope I'm proven wrong.
Fortunately, the decision makers heard the voices from the field and from customers and eventually offloaded the container orchestration layer (and the underlying infrastructure) to managed k8s service providers; the solution is now delivered as Helm charts to be installed on customers' own managed k8s (EKS, AKS, GKE and OpenShift - oh, Red Hat OpenShi(f)t is just another rabbit hole...). But again, the lack of knowledge and hands-on skill in operating/running k8s (not yet a commodity, although it is hyped to be...) makes the journey quite turbulent from a business PoV (technically it's easy: build the skills in-house, hire the right talent).