Also, if I wanted to go the AI route, I could ask Claude Opus or GPT-4: "Here is an endpoint that returns this type of data <paste data type here>; make me a React component that fetches this data and presents it in a searchable table using shadcn."
It would get me 80-90% of the way there; it would only need a little manual tweaking to conform to the current project's code standards.
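To make that concrete, here is roughly the shape of what such a prompt tends to return. This is only a sketch, assuming shadcn/ui's default "@/components/ui" import paths and a hypothetical /api/users endpoint returning { id, name, email } rows:

```tsx
"use client";

// Rough sketch of the kind of component such a prompt tends to produce.
// Assumes shadcn/ui's default "@/components/ui" import paths and a
// hypothetical /api/users endpoint returning { id, name, email } rows.
import { useEffect, useMemo, useState } from "react";
import { Input } from "@/components/ui/input";
import {
  Table,
  TableBody,
  TableCell,
  TableHead,
  TableHeader,
  TableRow,
} from "@/components/ui/table";

type User = { id: string; name: string; email: string };

export function UsersTable() {
  const [rows, setRows] = useState<User[]>([]);
  const [query, setQuery] = useState("");

  // Fetch once on mount; real project code would add loading/error states.
  useEffect(() => {
    fetch("/api/users")
      .then((res) => res.json())
      .then(setRows);
  }, []);

  // Simple client-side search over name and email.
  const filtered = useMemo(
    () =>
      rows.filter((r) =>
        `${r.name} ${r.email}`.toLowerCase().includes(query.toLowerCase())
      ),
    [rows, query]
  );

  return (
    <div className="space-y-4">
      <Input
        placeholder="Search users…"
        value={query}
        onChange={(e) => setQuery(e.target.value)}
      />
      <Table>
        <TableHeader>
          <TableRow>
            <TableHead>Name</TableHead>
            <TableHead>Email</TableHead>
          </TableRow>
        </TableHeader>
        <TableBody>
          {filtered.map((u) => (
            <TableRow key={u.id}>
              <TableCell>{u.name}</TableCell>
              <TableCell>{u.email}</TableCell>
            </TableRow>
          ))}
        </TableBody>
      </Table>
    </div>
  );
}
```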
Some reasons are listed below:
1. GPT hallucinates shadcn's big data table implementation more often than not. Also, you don't get props for everything by default (page size, filtering, sticky columns, etc.).
2. We have certain rules we ask our AI to stick to, such as always putting a table in a card and using consistent padding, so there's less cognitive effort spent thinking about how to style your tools.
3. Current LLMs are still shaky with Next.js 14, especially on where to use server vs. client components; a sketch of the usual split follows this list.
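On point 3 specifically, the usual App Router split is: the page stays a server component and does the data fetching, and only the interactive table is a "use client" component. A minimal sketch under assumptions of my own (a placeholder endpoint and a hypothetical UsersSearchTable client component that takes the rows as a prop, like the sketch above but without its own fetch); it also applies the table-in-a-card rule from point 2:

```tsx
// app/users/page.tsx — no "use client" directive, so this stays a server
// component and the fetch runs on the server at request time.
import { Card, CardContent } from "@/components/ui/card";
// Hypothetical client component ("use client" at the top of its own file)
// that owns the search input and filtering state and receives rows as a prop.
import { UsersSearchTable } from "./users-search-table";

export default async function UsersPage() {
  // Placeholder internal endpoint; a real tool might query the DB directly here.
  const res = await fetch("https://internal.example.com/api/users", {
    cache: "no-store",
  });
  const users = await res.json();

  return (
    <Card>
      <CardContent className="p-6">
        <UsersSearchTable data={users} />
      </CardContent>
    </Card>
  );
}
```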
Well, you need to set up boilerplate, buy a CRM, buy an email marketing tool, etc., and something like 30% of companies think "oh, this kind of sucks" and make one of those steps their company.
Nothing against the project, but it's certainly one of those.
In particular, I think it overestimates the pain it solves and underestimates both modern frameworks (e.g. the T3 stack) and, especially, how difficult it is to get anything through enterprise procurement.
I can't imagine it being easier to get this purchased than to do that stuff yourself, particularly when needing help with the basics makes you look bad as a dev.
We may have underplayed it a little, but only because we already have existing users finding value with our current setup.
The goal is more like v0.dev, where with a prompt, you can generate your entire internal tool. We think we're not too far away from this (as shown in the YouTube demo).