> Gemini 2.5 Pro is incredible at coding, so we’re excited to bring it to Google AI Studio’s native code editor. It’s tightly optimized with our Gen AI SDK so it’s easier to generate apps with a simple text, image, or video prompt. The new Build tab is now your gateway to quickly build and deploy AI-powered web apps. We’ve also launched new showcase examples to experiment with new models and more.
This is exactly what I see coming. Between the marketing and the reality of what the tool can actually deliver, we will eventually reach the next stage of compiler evolution: directly from AI tools into applications.
We are living through a development jump like when Assembly developers got to witness the adoption of FORTRAN.
Language flamewars are going to be a thing of the past, replaced by model wars.
It might take a few cycles, but it will come nonetheless.
I agree. Until about 2005 it was code-on-device and run-on-device. The tools and languages were limited in absolute capabilities, but easy to understand and use. For about the past 20 years we've been in a total mess of code-on-device -> (nightmare of deployment complexity) -> run-on-cloud. We are finally entering the code-on-cloud and run-on-cloud stage.
I'm hoping this will allow domain experts to more easily create valuable tools instead of having to go through technicians with arcane knowledge of languages and deployment stacks.
Having worked on expert systems, I can say the difficulty in creating them is often the technical limitations of the end users. The sophistication of tooling needed to bridge that gap is immense and often insurmountable. I see AI as the bridge across that gap.
That said, it seems like both domain expertise and the ability to create expert systems will be commoditized at roughly the same time. While domain experts may be happy that they don’t need devs, they’ll find themselves competing against other domain experts who don’t need devs either.
Finally, companies can wrench back control from those pesky users. Only Google should have root; any other interaction should be routed through their AI! You wouldn't want to own your own device anyways, just rent it!
> This is exactly what I see coming. Between the marketing and the reality of what the tool can actually deliver, we will eventually reach the next stage of compiler evolution: directly from AI tools into applications.
Is this different from other recent models trained, e.g., for tool calling? Sounds like they fine-tuned on their SDK. Maybe someday, but it's still going to be limited in what it can zero-shot without you needing to edit the code.
> Language flamewars are going to be a thing of the past, replaced by model wars.
This does seem funny coming from you. I feel like you'll still find a way :P
I think there will still need to be some kind of translation layer besides natural language. It's just not succinct enough (especially English, ew), especially where precision matters, like a rules engine. The thought of building something like an adjudication or payment system with an LLM sounds terrible.
You don't need to use natural language to write your rules engine. LLMs speak every language under the sun, real or made up.
You could define your rules in Prolog if you wanted - that's just as effective a way to communicate them to an LLM as English.
Or briefly describe some made-up DSL and then use that.
For coding LLMs the goal is to have the LLM represent the logic clearly in whatever programming language it's using. You can communicate with it however you want.
I've dropped in screenshots of things related to what I'm building before, that works too.
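To make the "made-up DSL" idea concrete, here is a minimal, hypothetical sketch in Python: a one-line rule grammar of the kind you could describe to an LLM in a couple of sentences, paired with a deterministic evaluator so the adjudication logic never depends on natural-language interpretation. The grammar, field names, and outcomes are all invented for illustration, not taken from any real rules engine.

```python
# Hypothetical DSL: "IF <field> <op> <value> THEN <outcome>".
# Rules are plain text you could hand to an LLM, but evaluation
# stays deterministic and inspectable.

def parse_rule(line):
    """Parse one 'IF field op value THEN outcome' rule into a dict."""
    _, field, op, value, _, outcome = line.split()
    return {"field": field, "op": op, "value": float(value), "outcome": outcome}

def evaluate(rules, record):
    """Return the outcome of the first matching rule, else 'default'."""
    ops = {">": lambda a, b: a > b,
           "<": lambda a, b: a < b,
           "=": lambda a, b: a == b}
    for rule in rules:
        if ops[rule["op"]](record[rule["field"]], rule["value"]):
            return rule["outcome"]
    return "default"

rules = [parse_rule(line) for line in [
    "IF amount > 10000 THEN review",
    "IF amount < 100 THEN auto_approve",
]]

print(evaluate(rules, {"amount": 25000}))  # review
print(evaluate(rules, {"amount": 50}))     # auto_approve
print(evaluate(rules, {"amount": 500}))    # default
```

The point is only that the "translation layer" can be a ten-line grammar rather than English prose.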
This is why I think Rabbit is one of the most interesting startups around. If I could wave a wand and go pick any startup to go work at, it would be Rabbit.
Gemini 2.5 will write a whole Linux kernel from scratch! We are seeing a paradigm shift! This is bigger than the invention of electricity! Awesome times to be alive!
Presumably Google AI Studio[1] and Google Firebase Studio[2] are made by different teams with very similar pitches, and Google is perfectly happy to have both of them exist, until it isn't:
- AI Studio: "the fastest place to start building with the Gemini API"
- Firebase Studio: "Prototype, build, deploy, and run full-stack, AI apps quickly"
…if you do it before publicly releasing and spending marketing budget on both products, giving them a full software lifecycle and a dedicated user-base that no longer trusts you to keep things running.
Honestly, even in that case it sucks to be a developer there knowing there’s a 50% chance that the work you did meant nothing.
I've used a bunch of these, and they're different with different use-cases, but your point still stands about them being confusing. It seems many AI companies see a world where we have lots of small mini-apps written by LLMs, so why not integrate that into a variety of tools, I guess?
AI Studio: Mostly a playground for building mini-apps that integrate with the Gemini APIs. A big sell seems to be that you don't need an API key; instead, you just build your app for testing and the access is injected somehow. The UI is more stripped down than an IDE and I assume you'd only use it to prototype basic things. I don't know why there are "deployment" options in the UI, frankly.
Firebase Studio: Mostly a sales funnel for Firebase, I assume, but this is a traditional prototyping/development tool that uses AI to make a product. It supports front-end and backend code. This also has a chat bot, but it's more of a web IDE than a chat-first interface.
Gemini Canvas: This is gemini-the-chatbot writing mini-web-apps in a side-panel. The use case seems to be visualization and super basic prototyping. I've used it to make super simple bespoke tools like a visualizer for structured JSON objects for debugging, or an API tester. The HTML is served statically from a google domain, and you can "remix" versions created by others with your own prompts.
Jules - Experimental tool that writes code in existing codebases by handling full "tickets" or tasks in one go. I've never used it, so I don't know the interface. I think it's similar to Codex though.
Gemini Code Assist - their version of a copilot. I think it's also integrated or cross-branded with
Vertex AI API, Gemini API - these are just APIs for models.
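As an illustration of the kind of throwaway debugging tool mentioned under Gemini Canvas, here is a minimal sketch (not from the thread, and not what the LLM generated) of a JSON structure visualizer in Python; the input document is invented for the example.

```python
# Minimal sketch: render a parsed JSON value as an indented tree,
# so nested structure is visible at a glance while debugging.
import json

def render(node, indent=0):
    """Return a list of lines describing `node` as an indented tree."""
    pad = "  " * indent
    lines = []
    if isinstance(node, dict):
        for key, value in node.items():
            lines.append(f"{pad}{key}:")
            lines.extend(render(value, indent + 1))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            lines.append(f"{pad}[{i}]")
            lines.extend(render(value, indent + 1))
    else:
        lines.append(f"{pad}{node!r}")
    return lines

doc = json.loads('{"user": {"id": 7, "roles": ["admin", "dev"]}}')
print("\n".join(render(doc)))
```

A tool like this is exactly the "bespoke, disposable" category these products seem to target: a few minutes to generate, used once, then thrown away.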
I recently tried to understand the AI products listed in the cloud console. That was not an easy task, despite them clearly having taken great pains to clean it up.
The ability to seamlessly integrate generated images is fascinating, although it currently takes too long to really work in a game or educational context.
As an experiment I just asked it to "recreate the early RPG game Pedit5 (https://en.wikipedia.org/wiki/Pedit5), but make it better, with a 1970s terminal aesthetic and use Imagen to dynamically generate relevant game artwork" and it did in fact make a playable, rogue-type RPG, but it has been stuck on "loading art" for the past minute as I try to do battle with a giant bat.
This kind of thing is going to be interesting for teaching. It will be a whole new category of assignment - "design a playable, interactive simulation of the 17th century spice trade, and explain your design choices in detail. Cite 6 relevant secondary sources" and that sort of thing. Ethan Mollick has been doing these types of experiments with LLMs for some time now and I think it's an underrated aspect of what they can be used for. I.e., no one is going to want to actually pay for or play a production version of my Gemini-made copy of Pedit5, but it opens up a new modality for student assignments, prototyping, and learning.
Doesn't do anything for the problem of AI-assisted cheating, which is still kind of a disaster for educators, but the possibilities for genuinely new types of assignments are at least now starting to come into focus.
I love this, and as for AI-assisted cheating, I would make it such that the student can use any tool whatsoever under the sun, but then needs to do a live in-person presentation on it followed by 10 minutes of Q&A. Some are better bullshitters than others, but you'll still see a very clear difference between those who actually worked and those who had the work done for them.
Yes, I think this kind of combination is where higher ed is going to land. I've been talking to a colleague lately about how social skills and public speaking just got more important (and are things we need to focus on actually teaching). Likewise, I think self-directed, individualized humanistic research is currently not replicable by AI nor likely to be - for instance, generating an entirely new historical archive by conducting oral history interviews. Basically anything that involves operating in the physical world and deploying human emotional skills.
The unsolved issue is scale. 5-10 minute Q&As work well, but are not really doable in a 120 student class like the one I'll be teaching in the fall, let alone the 300-400 student classes some colleagues have.
Just as a side note, I ended up turning the 1970s RPG type game it originally made into a text-based RPG where you play as Henry James in 1889, kind of fun! Curious if the link actually works:
Why did they hide the model thought details? Thoughts look like this now:
Analyzing TypeScript Errors
I'm currently focused on the actions/user.ts file, where I've identified a few issues. Specifically, I need to address a "Cannot find name" error related to UpdateUserDetailsFormState, and confirm that the intended target is UpdateUserDetailsFormInput. There might be some other discrepancies in the file that I need to resolve.
Debugging the Import
I've been trying to pinpoint the Cannot find name 'UpdateUserDetailsFormState' error. The type is definitely defined and imported, but it's not being recognized in the updateUserDetails function's arguments. I'm suspecting a scoping issue with the import statement within actions/user.ts. I also need to verify that UpdateUserDetailsFormState is correctly defined with the fieldValues property as optional as per the schema.
Someone else said that in this case you can access it with a premium subscription, but I have also heard that some products are hiding the CoT because of concerns about distillation.
Seeing these announcements makes me nervous. I feel like I found some sort of cheat code by using AI Studio for free; seeing them build it out makes me wonder when they are going to start charging for it. Though Grok has been very generous as an alternative. I guess there are a lot of good options out there. I'm just used to hitting limits most places, with models that aren't as good.
Agreed. And for some reason I find responses from AI Studio are much better than Gemini's for the same models. I _already have_ Gemini Advanced, but still mostly use AI Studio just for the quality of the responses.
It's just a copy of all the other models in terms of functionality. I didn't find anything controversial or extraordinary in it. Image generation sucked.
Some Indian twitterers found a way to get it to utter Hindi profane words, that's probably the most controversial thing I know about it.
Wasn't it Google's models that showed America's founding fathers as black women? They all have their issues. I just want to get things done, before AI just takes over everything.
Get outta here! You can do that in AI Studio now? If so, I need to run, not walk, to the nearest computer. Too bad I am sitting on the toilet right now..
Sounds like an absolute nightmare for freedom and autonomy.
Context windows are still tiny by "real world app" standards, and this doesn't seem to be changing significantly.
[1] https://aistudio.google.com/apps
[2] https://firebase.google.com/
"this is brilliant! I'll assign multiple teams to the same project. Let the best team win! And then the other teams get PIP'd"
Canvas: "the fastest place to start building with the Gemini APP"
Also, did you hear about Jules?
Why does Google suck so much at product management?
https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
Definitely a downgrade from the old version, though really it’s just Google deciding to offer less for free.
Running LLMs costs a stupid amount of money, beyond just the stupid amount of money to train them. They have to recoup that money somewhere.
I don't need it inserting console.logs and alert popups with holocaust denials and splash screens with fake videos of white genocide in my apps.
"Te harsh jolt of the cryopod cycling down rips you"
"ou carefully swing your legs out"
I find it really interesting that it's like 99% there, and the thing runs and executes, yet the copy has typos.
https://www.youtube.com/watch?v=Dd06Md1xOd0