1. First, authentication didn't work on my headless system, because it wants an OAuth redirect to localhost - sigh.
2. Next, WebFetch can't navigate GitHub, so I had to manually dig out some references for it.
3. About 2 mins in, I just got

```
ℹ Rate limiting detected. Automatically switching from gemini-2.5-pro to gemini-2.5-flash for faster responses for the remainder of this session.
```

in a loop with no more progress.
I understand the tool is new, so I'm not drawing too many conclusions from this yet, but it does seem like it was rushed out a bit.
Well, OpenAI, I think you are mixing up your own backend for economic growth with everyone’s!
Decades ago I worked for a classical music company, fresh out of school. "So... how do you anticipate where the music trend is going?", I once naively asked one of the senior people on the product side. "Oh, we don't. We tell people, really quietly, and they listen."

They and the marketing team spent a lot of time doing very subtle work, easily as much as anything big like actual advertising. Little conversations with music journalists, just a dropped sentence or two that might be repeated in an article, or marginally influence one; that another journalist might see and form an opinion on, or that might spark some other curiosity. It only takes a small push, and it tends to spread across the industry. It's not a fast process, but when the product team can road-map a year or so in advance, a marketing team can do a lot to prepare things so the audience is ready.
LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle.
Replace LLMs with TV, or smartphones, or maybe even McDonald's, and you've got the same idea. Through TV, corporations got to control a lot of the social world and people's behavior.
- They now require a DUNS number to submit an app
- You now need 10-15 people to "QA" your app before submitting
- Now this.
It just seems that Google wants the "major" apps and nothing else.
Edit/Append: I've had this idea [1] forever (since the 1990s, possibly earlier... I don't have notes going that far back). Imagine the simplest possible compute element, the lookup table, arranged in a grid. Architectural optimizations I've pondered over time lead me to a 4-bits-in, 4-bits-out lookup table, with latches on all outputs and a clock signal. This prevents race conditions by slowing things down. The gain is that you can now just clock a vast 2D array of these cells with a two-phase clock (like the colors on a chessboard), and it's a universal computer, Turing complete, but one you can actually think about without your brain melting down.
The problem (for me) has always been programming it and getting a chip made. Thanks to the latest "vibe coding" stuff, I've gotten out of analysis paralysis, and have some things cooking on the software front. The other part is addressed by TinyTapeout, so I'll be able to get a very small chip made for a few hundred dollars.
Because the cells are only connected to their neighbors, the runs are all short and low-capacitance, so you can really, REALLY crank up the clock rates, or save a lot of power. And because the grid is uniform, you won't have the hours- or days-long routing problems that you have with FPGAs.
If my estimates are right, it will cut the power requirements for LLM computing by 95%.
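To make the architecture concrete, here is a toy simulator of a grid of 4-bit-in/4-bit-out LUT cells with latched outputs and a two-phase chessboard clock, as described above. The cell "program" (a wire-cross that passes each input straight through to the opposite side), the bit layout, and all names here are my own illustrative assumptions, not the author's actual BitGrid design.

```python
# Toy BitGrid-style fabric: a 2D array of 4-in/4-out lookup-table cells
# with latched outputs, clocked in two chessboard phases so that no two
# neighboring cells ever update at the same time (no race conditions).

W, H = 4, 4  # tiny demo grid

# Bit layout of a cell's 4-bit I/O word: bit3=N, bit2=E, bit1=S, bit0=W.
def wire_cross(inp):
    """Example cell program: route each input to the opposite side
    (N in -> S out, E in -> W out, etc.) -- four crossing wires."""
    n, e, s, w = (inp >> 3) & 1, (inp >> 2) & 1, (inp >> 1) & 1, inp & 1
    return (s << 3) | (w << 2) | (n << 1) | e

# Every cell holds a 16-entry LUT; here all cells run the same program.
luts = [[[wire_cross(i) for i in range(16)] for _ in range(W)] for _ in range(H)]

def gather(grid, y, x):
    """Assemble a cell's 4-bit input word from the latched outputs of
    its four neighbors (off-grid edges read as 0)."""
    n = (grid[y - 1][x] >> 1) & 1 if y > 0 else 0      # above cell's S bit
    s = (grid[y + 1][x] >> 3) & 1 if y < H - 1 else 0  # below cell's N bit
    e = (grid[y][x + 1] >> 0) & 1 if x < W - 1 else 0  # right cell's W bit
    w = (grid[y][x - 1] >> 2) & 1 if x > 0 else 0      # left cell's E bit
    return (n << 3) | (e << 2) | (s << 1) | w

def step(grid, phase):
    """One clock phase: update only the cells of one chessboard color,
    so every cell reads stable, latched neighbor values."""
    new = [row[:] for row in grid]
    for y in range(H):
        for x in range(W):
            if (x + y) % 2 == phase:
                new[y][x] = luts[y][x][gather(grid, y, x)]
    return new

# Demo: launch a southbound bit from cell (0, 1) and clock four phases.
grid = [[0] * W for _ in range(H)]
grid[0][1] = 0b0010  # seed: S-facing output bit set
for t in range(4):
    grid = step(grid, t % 2)  # alternate the two chessboard phases
# the bit marches down column 1, one row per phase, ending at cell (3, 1)
```

Because each phase only ever reads latched values from the opposite color, the whole fabric can be clocked with just two global signals, which is what makes the short neighbor-only wiring (and the absence of FPGA-style global routing) plausible.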
[1] Every mention of BitGrid here on HN - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...