This is pretty fascinating and comes with some complicated AI-world incentives that I've been ruminating on lately. The better you document your work, the stronger contracts you define, the easier it is for someone to clone your work. I wouldn't be surprised if we end up seeing open source commercial work bend towards the SQLite model (open core, private tests). There's no way Cloudflare could have pulled this off without Next's very own tests.
Speaking more about the framework itself, the only real conclusion I have here is that I feel server components are a misunderstood and under-utilized pattern, and anyone attempting to simplify their DX is a win in my book.
Next is very complex, largely because it has incrementally grown and kept somewhat backwards compatible. A framework that starts from the current API surface and grows can be more malleable and make some tough decisions here at the outset.
Crazy to see it's already being run on a .gov domain[0]. TTFGOV as a new adoption metric?
> The better you document your work, the stronger contracts you define, the easier it is for someone to clone your work.
Well said; this is my thinking as well. One person or organization can do the hard work of testing multiple approaches to the API, establishing and revising best practices, and developing an ecosystem. Then once things are fairly stable and well-understood, another person can just yoink it.
I have little empathy for Vercel, and here they're kind of being hoist by their own petard of inducing frustration in people who don't use their hosting; but I'm concerned about how smaller-scale projects (including copyleft ones) will be laundered and extinguished.
> Then once things are fairly stable and well-understood, another person can just yoink it.
That transparency & availability for community contributions or forks is the point of open-source.
If you're only using open-source as marketing because you're bad at marketing, then you should probably go closed source & find a non-technical business partner.
Whoever "yoinks" the package runs into the same problem because they now have to build credibility somehow to actually profit from it.
> There's no way Cloudflare could have pulled this off without next's very own tests.
I'm very unconvinced. History has shown us very complex systems being reverse engineered without access to the source code. With access to the source code, coupled with the rapid iteration of AI, I don't see any real moat here; at best a slight delay.
Source code is one thing; tests covering the codebase are another.
And if you just copy the source code or translate it one-to-one into a new language, rather than make a behavioral copy, there will be copyright issues.
The tests are absolutely essential, otherwise there's no signal to guide the LLM towards correct behavior and hallucinations accumulate until any hope of forward progress collapses.
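The loop being described can be sketched roughly like this. This is an illustrative sketch, not any real tool's API: `generatePatch` and `runSuite` are hypothetical stand-ins for "ask the model to fix the code" and "run the inherited suite".

```javascript
// Hedged sketch of the test-driven porting loop: the inherited test suite
// is the only ground truth steering the model, so each iteration runs the
// tests and feeds the failures back in. `generatePatch` and `runSuite`
// are hypothetical stand-ins, not part of any real tool.
async function portUntilGreen(generatePatch, runSuite, maxIters = 50) {
  let failures = await runSuite();
  for (let i = 0; i < maxIters && failures.length > 0; i++) {
    await generatePatch(failures); // the model only ever sees failing tests
    failures = await runSuite();   // re-running is what stops hallucination drift
  }
  return failures.length === 0;
}

module.exports = { portUntilGreen };
```

Without the `runSuite` signal, nothing in this loop distinguishes a correct patch from a plausible-looking one.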
> I wouldn't be surprised if we end up seeing open source commercial work bend towards the SQLite model (open core, private tests).
Wouldn't this just mean that the actual open source is the tests? Or the spec? Or... the artifact which acts as the seed for the program, whatever that ends up being?
I'm not sure about this. LLMs can extract both documentation and tests from bare source code. That said I think you're correct that having an existing quality test suite to run against is a huge help.
Man, I love Next ... but I also love Vite ... and I hate the Next team, because they focus on fancy new features for 0.1% of their users, at the complete expense of the other 99.9% of the Next community (who they basically ignore).
This gives someone like me everything we want. Better performance is something the Next community has been begging for for years: the Next team ignored them, but not the Cloudflare team. Meanwhile Vite is a better core layer than the garbage the Next people use, but you still get the full Next functionality.
I wish Cloudflare the best of luck with this fork: I hope it succeeds and gets proven so I can use it at my company!
React was originally meant to be the 'V' in MVC. You can still use it that way and React becomes very simple when you only use it for UI. Why do data fetching in a React component?
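The separation being argued for can be sketched without any framework code at all. In real React this would be a function component receiving `user` as a prop; here the view is just a pure props-to-markup function so the sketch stays self-contained, with all fetching in a "controller" layer outside the view.

```javascript
// Sketch of "React as just the V": the view is a pure function of its
// props, and all data fetching lives in the controller, never in the view.
function userView({ name, posts }) {
  const items = posts.map((p) => `<li>${p}</li>`).join("");
  return `<section><h2>${name}</h2><ul>${items}</ul></section>`;
}

// "Controller": fetching happens here, so the view stays trivially testable.
async function userController(fetchUser) {
  const user = await fetchUser(); // e.g. a DB call or REST request
  return userView(user);
}

module.exports = { userView, userController };
```

Because `userView` is pure, it can be unit-tested with plain objects and no mocking of network or framework internals.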
Rails 8 is surprisingly good nowadays. It absolutely still has its share of problems (e.g. Bundler being slow, the frontend story being crappy without Inertia, lack of types which is a biggie, memory) but it is still a fantastic framework imo.
The basic premise of Next is good, but it definitely has more overhead than it should, has odd "middleware", and is very hard to optimize. I view this mostly as a React problem though, since any page requires full hydration and ships everything to the client. RSCs are... not my favorite for sure.
I too have been very frustrated by this, and I made an "Astro for dynamic sites" TypeScript framework called Hyperspan ( https://www.hyperspan.dev ) that aims to fill the gap in the JS ecosystem for a modern fully dynamic option that, similar to Astro, makes dynamic islands easy. I have enjoyed using it in all my own projects. Check it out if you want.
At my job we have some 7+ year old Next.js apps that don't receive new features but still do their jobs perfectly fine, and the Next team keeps changing random shit around for no reason. We've had to waste time on multiple refactors already for major Next.js version bumps once the older ones are no longer supported.
Is there any front end framework that doesn't do this? I dropped out of the front end years ago, and it seems to just get worse every year with a profusion of confusion. Doesn't anyone yearn for back when we didn't have to build the front end at all?? Just emit some HTML and serve up some JS files from the backend, and everything just flows from there?
Someone go make an AI rewrite of Apache+Mod-PHP and sell it to zoomers as the hip new thing already please
Is there any reason to keep upgrading if the apps keep doing their jobs perfectly fine? Pull in a stable version of the framework and the associated docs and stay there.
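Staying put is mostly a matter of pinning exact versions, so a routine install never floats you onto a new major. A sketch (the version number is illustrative; pick whichever release your apps are stable on):

```shell
# Pin the exact version instead of a ^range; npm's --save-exact flag
# writes "13.5.6" rather than "^13.5.6" into package.json.
npm install --save-exact next@13.5.6
```

With the caret gone from `package.json`, installs are reproducible and upgrades become a deliberate decision rather than a side effect.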
I'm moderately hopeful that LLMs will help here because they lack the human motivations to needlessly mess around with stuff and over-complicate things.
What if we all move to vinext? I asked Claude to migrate us in a git worktree using a team of agents, installed the vinext skill to help with that, and it did it in 10 minutes.
Also, why do you need support? Agents are the support.
What is it you love about Next that isn’t tied to Vercel and isn’t available elsewhere? I love Next too but I find the value is inextricably linked to Vercel. I can’t imagine choosing to use Next if I’m not choosing it for Vercel’s fancy stuff.
Weird, I hate Next and I love Vite. We have a big (I mean _really_ big) production app that runs on Next.js at work and it's the slowest thing I've ever worked on. I had to upgrade my machine to an M4 Pro just to get local dev compile times down from 5-8 minutes to ~30-60 seconds per route. And my hot refreshes are down from ~15-20 seconds to 5-10. It's _bad_. All the Next.js team does is give you the run-around and link to their docs and say here, try these steps, you're probably doing something wrong, etc. Nope. The framework is just slow. They use simple toy apps to demo how fast it is, but nobody tells you how slow it is at scale.
If you are using webpack, see if you can make the switch to turbopack. It cut my build times from ~1 minute to 15 seconds, incremental builds are down from 10 seconds to 2. Memory usage is down a ton as well. But if you rely on webpack plugins this may not be an option for you.
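For reference, the opt-in is just a flag on the dev script. This assumes Next.js 15, where the flag is spelled `--turbopack`; check your version, since some older releases used `--turbo`:

```json
{
  "scripts": {
    "dev": "next dev --turbopack",
    "build": "next build"
  }
}
```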
It may be sacrilege to bring it into this conversation, but I've spent the last year building a fairly large community site in Nuxt, vite has been wonderful, though I prefer vue over react. I am a little annoyed I paid for NuxtUI Pro like 3 months before it became free, but whatever.
Yeah, Vercel should have done this with Next.js a while ago. There is a reason quite literally every other framework uses Vite: it is amazing, easy to use, and easy to extend.
I mean, you don't really want to use JavaScript for the backend anyway... What's the problem with just using Vite and any backend of your choosing?
I wonder to what extent you should say you "rebuilt" something when the most basic hello world example doesn't work. And I wonder to what extent it makes sense to call it "from scratch" if you inherit a battle tested extensive test suite from the thing you're rebuilding, and the thing you're rebuilding is part of the training data.
Here's the first paragraph of Harry Potter and the philosopher's stone. I rewrote it from scratch, apparently:
Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much. They were the last people you’d expect to be involved in anything strange or mysterious, because they just didn’t hold with such nonsense. Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, beefy man with hardly any neck, although he did have a very large mustache.
I find it interesting that they bought Astro (https://blog.cloudflare.com/astro-joins-cloudflare/), which from my definitely-not-a-frontend-person perspective seems to tackle a similar problem to Next. A month ago.
If it is so cheap to make something that they recommend using (rather than a proof of concept), why buy Astro (presumably it was more expensive than the token cost of this clone?).
One conclusion is that, at the organisational level, it still makes sense to hire the “vision” behind the framework, rather than just clone it. Alternatively, maybe AI has improved that much in 1 month!
I'm very patient with the AI-led porting projects, since they're revealed with a big engagement splash on social media. Could it be durable? Sure, but I doubt anyone is in that much of a rush to migrate to a project built in a week either.
I view it as a long-overdue exit ramp for maintainers of Next.js-based webapps to extricate themselves from its overly-opinionated and unnecessarily-tightly-coupled build tooling. Being stuck on webpack/rspack and unable to leverage vite has been a huge downside to Next.js. It's a symptom of Vercel's economic incentives. This project fixes it in one fell swoop. I predict it hurts Vercel but saves Next.js.
Astro is a different paradigm. Acquiring Astro gives Cloudflare influence over a very valuable class of website, in the same way Vercel has over a different class from their ownership of Next.js. Astro is a much better fit for Cloudflare. Next.js is very popular and god awful to run outside of Vercel, Cloudflare aren’t creating a better next.js, they’re just trying to make it so their customers can move Next.js websites from Vercel to Cloudflare. Realistically, anyone moving their next.js site to Cloudflare is going to end up migrating to Astro eventually.
Astro isn’t solving the same surface as next. Astro is great for static sites with some dynamic behavior. The same could be said about next depending on how you write your code, but next can also be used for highly dynamic websites. Using Astro for highly dynamic websites is like jamming a square peg into a round hole.
We use Astro for our internal dev documentation/design system and it’s awesome for that.
I think they just want to steer users/developers to CF products, but maybe not? It is interesting to see the two platforms. I've moved to Svelte; never been a frontend person either, but I'm kind of enjoying it actually.
Astro has "server islands" which rely on a backend server running somewhere. If 90% of the page is static but you need some interactivity for the remaining 10%, then Astro is a good fit, as that's what makes it different than other purely static site generators. Unlike Next.js, it's also not tied to React but framework-agnostic.
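Conceptually, the pattern looks like the sketch below. This is illustrative JavaScript, not Astro's actual API: the page is a static shell with one placeholder, and only the island is rendered by a backend afterwards.

```javascript
// Illustrative sketch of the server-island idea (not Astro's API): 90% of
// the page is static, and one placeholder gets filled in by the server.
function staticShell() {
  return `<main><h1>Docs</h1><div id="island">loading...</div></main>`;
}

function renderIsland(user) {
  return `<span>Hi, ${user.name}</span>`; // the dynamic 10%
}

function fillIsland(html, user) {
  return html.replace(
    '<div id="island">loading...</div>',
    `<div id="island">${renderIsland(user)}</div>`
  );
}

module.exports = { staticShell, fillIsland };
```

The static shell can be cached and served from anywhere; only `renderIsland` needs a running backend, which is exactly the piece a host like Cloudflare wants to sell.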
Anyways, that's why it's a good fit for Cloudflare: that backend needs to be run somewhere, and Astro is big enough to have some sort of a userbase behind them that Cloudflare can advertise its service to. Think of it more as a targeted ad than a real acquisition driven by deep interest in the technology itself; if that were the case, they could've just forked it instead of acquiring it.
From Astro's perspective, they're (presumably) getting more money than they ever did working on a completely open source tool with zero paywalls, so it's a win-win for both sides that Cloudflare couldn't get from their vibe-coded project nobody's using at the moment.
This is probably the most interesting AI experiment I've seen yet. Looking through the codebase has me wondering where all the code is. I don't know if anyone has had the displeasure of going through the next.js codebase, but I estimate it's at least two orders of magnitude more code than this reimplementation. Which makes me wonder: does it actually handle the edge cases, or does it just pass the tests?
Like compare the two form implementations for example. Vinext is a completely different implementation compared to what the Next.js version does. Is their behaviour actually the same? The rewrite looks incredibly naive.
The behavior isn't entirely the same and reaching 100% parity is a non-goal, but there are a few things to note.
This is still a very early implementation, and there are undoubtedly issues that weren't covered by Next's original test suite (and thus weren't inherited) but that aren't obvious enough to pop up with the apps we've tried so far.
As for why it's so much smaller: by building on top of Vite and its React + RSC plugins, there is a whole lot of code that we don't need to write. That's where a significant portion of the LOC difference comes from.
> The result, vinext (pronounced "vee-next"), is a drop-in replacement for Next.js
"Drop-in" in my mind means I can swap the next dependency for the vinext dependency and my app will function the same. If the reality is that I have to spend hours or days debugging obscure edge cases that appear in vinext, I wouldn't exactly call that a drop-in replacement. I understand that this is an early version and that it doesn't have parity yet, but why state that it is a non-goal? For many of us, that makes vinext a non-choice, unless we choose to develop for vinext from the beginning.
Furthermore, if you're making a tool that looks almost like a well-known and well-documented tool, but not quite, how is gen AI going to be able to deal with the edge cases and vinext-specific quirks?
Yeah I'm curious about all the routing edge cases, form actions, server functions etc, since that is where most of the complexity of the app router comes from. Does it encrypt captured values inside closures sent to the client? Stuff like that.
It is the most passive-aggressive thing I've ever seen. The Cloudflare team had issues with the Next team? And they responded with 'we can do your whole product with an intern and AI', lol.
Yeah, I'm not even sure what this is, or whether or not this is even intended to be a serious project. It just comes across to me as deeply unprofessional. I say this as someone who doesn't even like Vercel and has their own gripes with that bizarrely run company.
The docs say that none of the code has been reviewed or tested properly, so how serious is this for people to run in a production setup, where many companies are not going to be super enthused that their production environment has been 'vibecoded' in a week? It just reads to me like a not-so-subtle middle finger to the Vercel guys.
On a side note, I find it extremely weird that the current AI era of software development is turning the state of the entire field into a reenactment of Lord of the Flies. Bizarre behavior all around from people who should know better.
Next.js had remote code execution vulnerabilities because of how they implemented React server-side. I am not touching an AI version without waiting for a while.
Thank you. This is the part that shocks me the most. I was always wary of Next.js for this exact reason (in fact, I refused to use it for personal projects before the RCE because I was scared that I would make a mistake and leak server-side data to the client).
Bugs like this happen easily and are even easier to miss if you're generating thousands of lines of code with AI.
It was a vulnerability that only could exist due to the incestuous relationship between React and Vercel. It was something Vercel has been trying to heavily push into React for years (which is why they hired previous react core team members).
I'm deeply skeptical of the "X reimplemented and it was super easy" thing.
The devil is in the detail.
So many edge cases unlikely to be there.
So many fine details unlikely to be there.
Years of bug fixes.
If it is literally a drop-in replacement and it passes all the tests, and you're replicating something with an extremely thorough test suite, then sure, I'll give you the benefit of the doubt.
Otherwise, I don't believe people "rebuilt X product in a week".
Godspeed to the poor souls that have to make it actually work in the long run:
"I can say that with some authority. Yes, I'm the one who wrote most of this project, but I'm also the director in charge of the entire Cloudflare Workers org, almost 80+ people at this point. I'm not just an IC engineer who has to find time to justify continuing to work on this. I can literally put people on it, and I've already been talking to the team about how to do exactly that."
I don't necessarily buy it either, but TFA talks about the test suite. They basically pulled 2k unit tests and 400 E2E tests from Next and made sure they all passed.
> Most abstractions in software exist because humans need help. We couldn't hold the whole system in our heads, so we built layers to manage the complexity for us.
Kind of a sloppy statement; I don't think it's accurate to say abstraction or layering exists in software just because humans need help comprehending it. Abstractions often exist to capture the essence of some aspect of the real world, and to allow for software reuse. Won't AIs still find reusing software useful? Secondly, it equates "abstractions" with "layers", which aren't really the same thing. Layers are more about separation of concerns; maybe it could be argued that layering is a type of abstraction.
[0] https://www.cio.gov/
I'm very unconvinced. History has shown us very complex systems being reverse engineered without access to the source code. With access to the source code, coupled with the rapid iteration of AI, I don't see any real moat here; at best a slight delay.
I have a demonstrated process here on my blog (all hand written without AI).
This bit about how to brute force decompilation: https://reorchestrate.com/posts/your-binary-is-no-longer-saf...
And this about how to do the conversion and address the LLM hallucination problem: https://reorchestrate.com/posts/your-binary-is-no-longer-saf...
Yes, it is absolutely possible.
I just wrote an open letter to fix Next.js. The goal is to compile a list of needs for whoever decides to fix those in any way possible...
https://please-fix-next.com/
Everything just becomes a plugin.
The web browser from Cursor didn't compile.
The C compiler from Anthropic couldn't build stdio.
And now, the Next.js clone from Cloudflare can't do a hello world.
Maybe I'm wrong. We'll see what happens a couple of years from now.
Thanks
It does not. Astro is more for static sites, not dynamic web apps.
Like compare the two form implementations for example. Vinext is a completely different implementation compared to what the Next.js version does. Is their behaviour actually the same? The rewrite looks incredibly naive.
https://github.com/vercel/next.js/blob/b8cbaad24ca66ec673a7b...
https://github.com/cloudflare/vinext/blob/main/packages/vine...
Either way, pretty impressive.
Woah.
I hope this becomes common practice. It might even work as an interview question for hiring new candidates.
"Use our proprietary SaaS and you too can approximate Next.js in 1/100 as much code using a bit of chicken wire and an LLM".
Whole thing sounded too good to be true, and it was.
https://github.com/cloudflare/vinext/issues/21#issuecomment-...