https://www.osnews.com/story/19921/full-text-an-epic-bill-ga...
https://us.macmillan.com/books/9780374615369/wheretheaxeisbu...
Then I get it to go through each section of the todo list and check each item off as it completes it. This generally results in completed tasks that stay on track, but it also means I can stop halfway through and come back to the tasks later without having to prompt from the start again.
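For anyone curious what that looks like in practice, here's a minimal sketch of the kind of checklist file I mean (the file name and the tasks themselves are just illustrative, not from any particular tool):

    # plan.md
    - [x] 1. Add the export endpoint
    - [x] 2. Write tests for the exporter
    - [ ] 3. Wire the export button into the UI
    - [ ] 4. Update the README

The agent flips each box from [ ] to [x] as it finishes a step, so when I resume a session I can just point it at the first unchecked item instead of replaying the whole prompt.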
Gemini now seems to advise you when you're telling it to do something that may not make sense - the first time I've really seen a non-Yes-Man LLM. It's more like a Yes-but-are-you-sure Man.
I'm surprised that this sort of pattern - you fix a bug and the AI undoes your fix - is common enough for the author to call it out. I would have assumed the model wouldn't be aggressively editing existing working code like that.
I've stopped using agent mode unless it's for a POC where I just want to test an assumption. Applying each step myself takes a bit more time, but it means less rogue behaviour and better long-term results IME.
I would imagine that, for those who want to build an app, paying agencies / developers who charge a fortune doesn't make sense anymore with tools like Replit.it, Bolt, Devin and now A0.
Great work, Seth and Ayo, for making it easier and potentially bringing the cost of building an app down to close to free. I'm assuming this is free now, as it's just a sign-up.
Is there any pricing on this?