I think reading docs, understanding a new system which someone else has designed, and fitting one's brain into _their_ organisational structure is the hard part. Harder than designing one's own system. It's the reason many don't stick with an off-the-shelf app, including Org mode.
This is also why it's so difficult to get teams on the same page about project management in their respective workplaces.
We use a Team plan ($500/mo), which includes 250 ACUs per month. Each bug or small task consumes anywhere from 1 to 3 ACUs, and you consume fewer if you're precise with your prompt upfront: a single larger prompt will usually use fewer ACUs than a series of follow-ups, because each follow-up prompt causes Devin to run more checks to validate its work. Since it can run scripts, compilers, linters, etc. in its own VM -- all of that contributes to usage. It can also run E2E tests in a browser instance and validate UI changes visually.
They recommend keeping most tasks under 5 ACUs, beyond which it becomes inefficient. I've managed to give it some fairly complex tasks while staying under that threshold.
So usually anywhere from $2 to $6 per task.
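For anyone who wants the arithmetic behind that range spelled out, here's a quick sketch using the plan figures above (the 1-3 ACU per-task range is my own observation, not an official number):

```typescript
// Plan figures from above: $500/mo for 250 ACUs => $2 per ACU.
const monthlyCost = 500;  // USD per month (Team plan)
const monthlyAcus = 250;  // ACUs included per month
const costPerAcu = monthlyCost / monthlyAcus;

// A typical bug or small task consumes 1-3 ACUs.
const taskCostUsd = (acus: number): number => acus * costPerAcu;

console.log(taskCostUsd(1)); // 2
console.log(taskCostUsd(3)); // 6
```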
I'm curious: is this js/ts? Asking because depending on the lang, good old machine refactoring is either amazeballs (Java + IDE) or non-existent (Haskell).
I don't work in js/ts, so I don't know what the state of machine refactoring is in VS Code ... But if it's as good as Java's, then "a couple of sentences" is quite slow compared to a keystroke or a quick dialog box with completion of symbol names.
It's not always right, but I find it helpful when it finds related changes that I should be making anyway, but may have overlooked.
Another example: selecting a block that I need to wrap (or unwrap) with tedious syntax, say memoizing a value with a React `useMemo` hook. I can select the value, open Quick Chat, type "memoize this", and within milliseconds it's correctly wrapped, saving me lots of fiddling on the keyboard. Scale this to hundreds of changes like these over a week, and it adds up to valuable time savings.
Even more powerful: selecting 5, 10, 20 separate values and typing: "memoize all of these" and watching it blast through each one in record time with pinpoint accuracy.
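For anyone unfamiliar with the pattern being delegated here, this is a rough sketch of the idea behind a hook like `useMemo`: cache a computed value and only recompute it when its dependency list changes. This is illustrative plain TypeScript, not React's actual implementation (React tracks this per hook call site inside a component's render cycle):

```typescript
// Minimal dependency-keyed memoization, in the spirit of React's useMemo.
// Dependencies are compared with Object.is, like React does.
function makeMemo<T>() {
  let prevDeps: unknown[] | undefined;
  let cached!: T;
  return (compute: () => T, deps: unknown[]): T => {
    const changed =
      prevDeps === undefined ||
      prevDeps.length !== deps.length ||
      deps.some((d, i) => !Object.is(d, prevDeps![i]));
    if (changed) {
      cached = compute();   // recompute only when a dependency changed
      prevDeps = deps;
    }
    return cached;          // otherwise hand back the cached value
  };
}

// Usage: the "expensive" computation runs once, then the cached result
// is reused while the dependency (the `items` array identity) is unchanged.
const memo = makeMemo<number[]>();
let computes = 0;
const items = [3, 1, 2];
const first = memo(() => { computes++; return [...items].sort(); }, [items]);
const second = memo(() => { computes++; return [...items].sort(); }, [items]);
// computes === 1, and `first` and `second` are the same cached array
```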
This alone is where I get a lot of my value. Otherwise, I'm using Cursor to actively solve smaller problems in whatever files I'm currently focused on. Being able to refactor things with only a couple sentences is remarkably fast.
The more you know about your language's features (and their precise names), and about higher-level programming patterns, the better time you'll have with LLMs, because your prompts match real documentation and examples with more precision.
It just means that they could be forced to defend those decisions in court, which is good and exactly the sort of thing that courts are supposed to decide.
I get the feeling many companies would find it easier to allow payment processors to censor something if the product isn't earning them much anyway.
"That's one of our least popular items we sell so honestly we don't really care..."
Which is within the reseller's rights to decide, but it does nothing to protect access to a product that's otherwise only available on a select few digital storefronts.
Then it becomes an issue for the game studio, who may not have the funding to fight a case to remain available. And then you have a situation where the game studio has become a victim of a payment processor's conspiracy theory that they're tied to fraud.
At my company, our React SPA uses just over 500 components across maybe ~30 views, with many smaller components living alongside them in the same files. Our bundle size is obscene, but our customers are using fast PCs on fast connections. Once it's cached, they're good -- and each new release comes roughly every 2-4 weeks, so fresh visits are painless.
As for server-side rendering with RSC, Next, Nuxt, and SvelteKit, I'd say it's overkill in most situations. The number of people deploying simple sites on Vercel using Next.js without really using its features is just silly (and costly).
The cost of speculating that you're building a highly interactive site that users will want to interact with is a suboptimal baseline experience until you get there. Provided you get there without losing users, it'll still be a subset of your audience that uses your thing in a way that truly offsets all of these costs. If that works for your product/business, then awesome! But if these costs are too high and will hurt your product, then you need to be a lot more deliberate about how you engineer it.
I typically approach the problem first and see what tools work best to solve the problem, rather than work backwards from my preferred set of tools.
> Once you've dug that hole, you either have to live with it, or face a painful rewrite later.
Based on the tools you mentioned familiarity with, wouldn't something like Astro be a happy middle ground? You start as an MPA by default and only add complexity to parts (entire routes or parts of a page) of the application that require it? Also, this hole goes both ways. If you've built your site as a SPA, and you realise that your product just needs to be HTML to stay competitive, it's a painful road to unpick the layers of abstraction you've bought if you don't wanna rewrite it.
> dynamic features, anything from basic showing / hiding of content, list / configuration-based rendering, or requesting data
What would a highly dynamic feature be in your opinion, and how does a SPA framework help you? All of the examples mentioned here, in my opinion, are fantastic candidates for server-side templating and progressive enhancement. I don't see the need for the SPA architecture here.
> mobile devices in low-bandwidth areas
Curious to know what kind of devices are present in your area? The costs of larger applications are felt in both network and CPU [1], so if you live in a relatively wealthy area (say over 75% of users have iPhones) you'd notice the negative effects of too much JS less. If you're in the public sector or building for the public, then you can't get away with the excuse that people on slow devices and networks aren't your target audience; you need to meet everyone where they're at. An HTML-first architecture is a better and more inclusive baseline for all.
[1] https://infrequently.org/2024/01/performance-inequality-gap-...
Absolutely, but it is additional boilerplate I have to worry about that serves as another layer alongside whatever SPA framework I prefer; whereas spinning up a Vite template with Vue, writing a few routes + components, and getting it built + deployed is easier. It's at such a low cost (bundle size, time, boilerplate) that the optimizations gained from Astro aren't even in my purview unless I'm experiencing a significant enough scale that justifies static rendering.
When my apps are feeling and behaving perfectly already, why optimize? It only comes with increased scope and cost to bend around another layer.
> If you've built your site as a SPA, and you realize that your product just needs to be HTML to stay competitive
This does go both ways, yes, but with SPAs, I have more flexibility around client-side features like transitions, animations, pagination for smaller datasets, and state-based views that depend on a lot of different interactions, user selections, etc. that all drive what gets displayed instantly, without making a single network request just to complete the interaction. The Venn diagram between the two shows SPAs doing more out of the box, at such a low cost to commit.
Whereas with MPAs and server-side templating, a better-feeling UI isn't a luxury you can enjoy without some form of JavaScript to wire up page interactions. Given these are often a requirement, you'll find yourself haphazardly adding an SPA library to an existing MPA to enhance the perceived UX that it couldn't provide. Then you're not only juggling 2 scripting languages, but also reconciling the separation of concerns between your client/server now.
Although, happy mediums have emerged: 37signals created Hotwire to help bridge this gap, the Laravel ecosystem has a similar library in Inertia.js, and htmx+Alpine is another interesting solution.
Lastly on this point, Svelte's build step compiles your SPA down to the smallest possible runtime. Svelte was actually built for creating SPAs intended to run on low-powered devices [1], typically seen in places like LATAM. React also has an official compiler that memoizes component data and callbacks. Preact is a 3kb alternative to React (~40kb) for low-powered devices, sharing the same APIs with only a few minor differences.
React, Vue, and Svelte all have server-side rendering, where you can comfortably write sites/apps that serve up smaller bundles and stream updates to the client as needed, without sacrificing a pleasant UX. So, to answer your question "what if your SPA needs to be HTML to stay competitive?" -- these are the most sustainable paths with the lowest costs.
>> You may be asking, "why don't you learn + use some of the alternatives you listed?"
As any reasonably tired, middle-aged engineer would say: I'm happy with the language(s), tools, libraries, and frameworks I'm comfortable using. The incessant drive to learn new tools that pretty much do the same thing, with dubious (and likely negligible) benefit at smaller scale, feels like a distraction when I could just be building and shipping something with familiar tech that thousands of other developers enjoy using. I also benefit from the community support, and besides, AI tools are currently at their best in TypeScript codebases.
Server-side templating requires dedicated hosting / VPS, which is a whole 'nother layer of deploying, managing, and debugging. Static sites and SPAs can be hosted with a static host for free (unless you scale like crazy), and any data-related needs are solved with serverless services like Supabase, Convex, lambdas/functions, and any number of others -- all with generous free plans and some level of portability.
In my case, I'm only developing SaaS applications that don't need mobile support and are only used by companies that buy decent computers for their employees. Otherwise I'm building small webapps for personal use and some that may become a hit among niche communities. Even my personal site is a tiny Vue app, and only because it's familiar and easy to change. The performance fiends can cry all they want that it isn't purely static, and in many ways, that amuses me.
[1] https://developer.mozilla.org/en-US/docs/Learn_web_developme...
The claim isn't "we don't like it", the claim is "this is damaging to society".
I don't agree with such things in many cases (and many people disagree with me when I'm the one saying something is damaging to society), but it's important to note the difference or you will always be arguing against something other than their claim.
> No one, including governments or payment processors, should be in the position to decide whether a platform can sell something or not.
It's kinda the job of the government to decide such things; but an automatic extension of that is, it's not the job of the payment processors… and I think they should be banned from doing so because it's damaging to society to let them take on this role.
That said, I don't agree with censorship, especially by payment processors of all groups. The slippery slope is very concerning for adults who enjoy any other category of content targeted by activist groups. Collective Shout has a history of attacking media falling outside the porn bubble.
IMO it will be hard for some traditional sites to adapt to the new browser capabilities, since we've built an entire ecosystem around SPAs. The author's advice should've been: use the browser's built-in capabilities instead of client-side libraries whenever possible.
Also, keep in mind he's sharing his own experience, which might be different from ours. I've used some great SPAs and some terrible ones. The bad ones usually come down to inexperience from developers and hiring managers who don't understand performance, don't measure it, don't handle errors properly, and ignore edge cases.
Some devs build traditional sites as SPAs and leave behind a horrible UX and tech debt the size of Mount Everest. If you don't know much about software architecture, you're more likely to make mistakes, no matter what language or framework you're using.
I realised years ago there's no "better" language, framework, platform, or architecture, just different sets of problems. That's why developers spend so much time debating implementation details instead of focusing on the actual problems or ideas. And that's fine, debates can be useful as long as we don't lose sight of what we're trying to solve and why.
For example: Amazon's developers went with an MPA. Airbnb started as an MPA but now uses a hybrid approach. Google Maps was built as an SPA, while the team behind Search went with an MPA.
It's usually inevitable, so it's easier to scaffold a Vite template and get cracking without any additional setup. The time-to-deploy is fast using something like Netlify or Vercel, and then I have the peace of mind knowing I can add additional routes or features in a consistent, predictable way that fits the framework's patterns.
I'd hate to develop an MPA and realize after the fact that now I need complex, shared data across routes and for the UX to be less disrupted by page loads. Once you've dug that hole, you either have to live with it, or face a painful rewrite later.
The exception I often see is targeting mobile devices in low-bandwidth areas where larger application bundles take longer to load, but I have to wonder how often this is the target audience. I live in a place where mobile data speeds are horrible, and access to WiFi is abundant, but even so, I rarely have a situation where I *need* to load up a non-critical site on the go — and having apps pre-installed doesn't really help either when the data they're requesting is barely trickling in on mobile.
So this oft-used exception doesn't really make sense to me unless I learn why this is so critical for some people.