Most of my clients don't even bother or care, and neither does their competition.
It's still exactly the same format as before, but the tech we're using has gotten a lot more advanced.
Let us know how you want the format to evolve and we'll work on it!
https://web.archive.org/web/20250818232649/https://www.newgr...
It has not been expensive to operate so far. If that ever changes, we can think about rate limiting it.
We used GPT-4o because it seemed like a decent general-purpose default model. We're considering adding an OpenRouter interface to a smorgasbord of additional LLMs.
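For a rough idea of what that could look like: OpenRouter exposes an OpenAI-compatible endpoint, so switching models is mostly a matter of changing the base URL and the model name. A minimal sketch, not our actual setup; the model identifier and environment variable name here are just placeholders:

    # Route a single chat request through OpenRouter's OpenAI-compatible API.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
        api_key=os.environ["OPENROUTER_API_KEY"],  # placeholder env var name
    )

    def ask(prompt: str, model: str = "openai/gpt-4o") -> str:
        """Send a single-turn chat request to whichever model OpenRouter exposes."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(ask("Say hello in one short sentence."))

Swapping models would then just mean passing a different model string, which is what makes the "smorgasbord" idea cheap to try.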
One day, on a plane before paying for the WiFi, I noticed that DNS queries still went through and thought it would be nice to chat with an LLM over them.
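Roughly, the trick is to smuggle the prompt into a DNS query and read the answer back out of a TXT record. A minimal client-side sketch with dnspython; the domain and the way the prompt is packed into the query name are assumptions for illustration, not a description of any real service:

    # Toy example: encode the prompt into the DNS query name and read the
    # reply from the TXT answer. "llm.example.com" is a placeholder zone.
    import dns.resolver  # pip install dnspython

    def ask_over_dns(prompt: str, zone: str = "llm.example.com") -> str:
        # DNS labels can't contain spaces and max out at 63 bytes each,
        # so replace spaces and truncate for this toy example.
        label = prompt.replace(" ", "-")[:63]
        answers = dns.resolver.resolve(f"{label}.{zone}", "TXT")
        # TXT records come back as one or more byte strings; join them.
        return "".join(
            part.decode() for rdata in answers for part in rdata.strings
        )

    if __name__ == "__main__":
        print(ask_over_dns("what-is-the-capital-of-france"))

Because captive portals typically let port 53 traffic through before you pay, this kind of query works even when normal HTTP is blocked.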
We are not logging anything, but OpenAI must be...
Full context > human context capacity > LLM context capacity.
We should all be able to agree on this, and it should settle the debates around the efficacy of vibe coding.