> React is now using the words "server" and "client" to refer to very specific things, ignoring their existing definitions. This would be fine, except Client components can run on the backend too
There were hard discussions about naming these things in the beginning. Even calling them "backend" and "frontend" (as the article suggests) wouldn't have been clear about their behavioral semantics. I understand the naming annoyances, but it's a complex issue that requires a lot more thought than just "ah, we should've called it this instead".
> …This results in awkwardly small server components that only do data fetching and then have a client component that contains a mostly-static version of the page.
> // HydrationBoundary is a client component that passes JSON
> // data from the React server to the client component.
> return <HydrationBoundary state={dehydrate(queryClient)}>
>   <ClientPage />
> </HydrationBoundary>;
It seems they're combining Next's native hydration mechanism with TanStack Query (another library) in order to more easily fetch in the browser?
To follow up on their WebSocket example, where they need to update a user card's state when a WebSocket connection sends data: I don't see what the issue would be with just using a WebSocket library inside a client component. I imagine it's something you'd have to do in any other framework too, so I don't understand what problem Next.js caused here.
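To make this concrete, here's a minimal sketch of the subscription logic a client component could wire up, kept framework-free for clarity. The `UserCard` shape, the store names, and the message format are my own illustrative assumptions, not the article's API:

```typescript
// Hypothetical sketch: the kind of state a client component could keep
// updated from a WebSocket, independent of server components entirely.
type UserCard = { name: string; status: string };

type Listener = (card: UserCard) => void;

function createUserCardStore(initial: UserCard) {
  let card = initial;
  const listeners = new Set<Listener>();
  return {
    get: () => card,
    subscribe(fn: Listener) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
    // In a real client component this would be the socket's onmessage
    // handler: socket.onmessage = (e) => store.handleMessage(e.data)
    handleMessage(raw: string) {
      card = { ...card, ...JSON.parse(raw) };
      listeners.forEach((fn) => fn(card));
    },
  };
}
```

Inside a `"use client"` component you'd open the socket and call `subscribe` in a `useEffect` (or feed the store to `useSyncExternalStore`); nothing about this pattern needs to involve the RSC layer at all.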
What they're doing looks like a hack and is probably the source of their issues in this section.
> Being logged in affects the homepage, which is infuriating because the client literally has everything needed to display the page instantly
I'm not sure I understand this part. They mention their app is not static but fully dynamic. Then how would they avoid showing a loading state between pages?
> One form of loading state that cannot be represented with the App Router is having a page such as a git project's issue page, and clicking on a user name to navigate to their profile page. With loading.tsx, the entire page is a skeleton, but when modeling these queries with TanStack Query it is possible to show the username and avatar instantly while the user's bio and repositories are fetched in. Server components don't support this form of navigation because the data is only available in rendered components, so it must be re-fetched.
You can use third-party libs to achieve this idea of reusing information from one page to another. An example is Motion's AnimatePresence, which allows smooth transitions between two React states. Another possibility (for reusing data from an earlier page) is to integrate directly with Next.js's new View Transitions API: https://view-transition-example.vercel.app/blog <- notice how clicking on a post shows the title immediately
> At work, we just make our loading.tsx files contain the useQuery calls and show a skeleton. This is because when Next.js loads the actual Server Component, no matter what, the entire page re-mounts. No VDOM diffing here, meaning all hooks (useState) will reset slightly after the request completes. I tried to reproduce a simple case where I was begging Next.js to just update the existing DOM and preserve state, but it just doesn't. Thankfully, the time the blank RSC call takes is short enough.
This seems like an artefact of the first issue: trying to combine two different hydration systems that aren't really meant to work together?
> Fetching layouts in isolation is a cute idea, but it ends up being silly because it also means that any data fetching has to be re-done per layout. You can't share a QueryClient; instead, you must rely on their monkey-patched fetch to cache the same GET request like they promise.
Perhaps the author is missing how React's cache works (https://react.dev/reference/react/cache) and how it can be used within Next.js to cache fetches _per tree render_, which avoids this problem entirely
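The idea behind `cache()` can be sketched in a few lines. This is a deliberately simplified, single-argument version just to show the deduplication behavior; React's real implementation scopes the memo to one server render pass rather than a module-level `Map`, and the `getUser` function here is a made-up example:

```typescript
// Simplified sketch of per-render request deduplication, in the spirit of
// React's cache(): repeated calls with the same argument within one render
// tree hit the underlying function only once.
function dedupe<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const memo = new Map<A, R>();
  return (arg: A) => {
    if (!memo.has(arg)) memo.set(arg, fn(arg));
    return memo.get(arg)!;
  };
}

// Two layouts asking for the same user would trigger one underlying fetch.
let fetchCount = 0;
const getUser = dedupe((id: string) => {
  fetchCount++; // stands in for a real network request
  return { id, name: `user-${id}` };
});
```

So a layout and a page can both call `getUser("1")` during the same render without re-doing the fetch, with no reliance on the monkey-patched `fetch` the author complains about.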
> This solution doubles the size of the initial HTML payload. Except it's worse, because the RSC payload includes JSON quoted in JS string literals, a format much less efficient than HTML. While it seems to compress fine with brotli and render fast in the browser, this is wasteful. With the hydration pattern, at least the data locally could be re-used for interactivity and other pages.
Yes, sending data twice is an architectural hurdle required for hydration to work. The idea of reusing that data on other pages was discussed earlier, via things like AnimatePresence.
What's important to note here is that the RSC payload sits at the bottom of the HTML. Since HTML is streamed by default, this won't impact time to first render. Again, other frameworks need to do this as well (in other ways, but it still needs to happen).
I totally understand the author's frustrations. Next.js isn't perfect, and I also have lots of issues with it. Namely, I dislike their intercept/parallel routes mechanism, and setting up ISR/PPR is a nightmare. I just felt the need to address some of their comments, in the hope it helps them.
As a first step, I would get rid of TanStack, since it's fighting against Next.js's architecture.
Or yeah just move entirely elsewhere :)
Now that I'm back at my normal office coding job, I feel like I'm actually saving less money, because I have rent and general city life to spend money on. It's all about the comforts one is used to.
The story of artists not having enough money is probably about people who are used to too many comforts. I've seen people complain they didn't have enough money to get by while living in an apartment close to a densely populated city and owning a car... get rid of those comforts if you want to make it!
80% of senior candidates I interview now aren’t able to do junior level tasks without GenAI helping them.
We've had to start doing more coding tests to assess their actual skill set as a result, and I try to make my coding tests as indicative as possible of our real work and the work they currently do.
But these people are struggling to work with basic data structures without an LLM.
So then I put coding aside, because maybe their skills lie in directing other folks. But no, they've also become dependent on LLMs to ideate.
That 80% is no joke. It’s what I’m hitting actively.
And before anyone says "well then, let them use LLMs": no. Firstly, we're making new technologies and APIs that LLMs really struggle with, even with purpose-trained models. Furthermore, if I'm doing that, then why am I paying for a senior? How are they any different from someone more junior or cheaper if they have become so atrophied?
Because they know how to talk to the AI. That's literally the skill that differentiates seniors from juniors at this point. And a skill that you gain only by knowing about the problem space and having banged your head at it multiple times.
I think upvoting/downvoting is a crucial aspect of news/information/knowledge. But we've been doing it with just plain numbers all along. Why not experiment with weights or more complex voting methods? For example: my reputation is divided into categories; I'm more of an expert in history than in politics, so my votes on historical subjects carry more weight. Feels like that's the next big step for news, instead of just another centralized aggregator?
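To illustrate what I mean, here's a tiny sketch of category-weighted voting. The names and the weighting formula are entirely illustrative (no existing system works this way as far as I know); the only point is that a voter's influence scales with their reputation in the story's category:

```typescript
// Hypothetical category-weighted voting: influence on a story grows with
// the voter's reputation in that story's category.
type Reputation = Record<string, number>; // e.g. { history: 100, politics: 5 }

type Vote = { rep: Reputation; direction: 1 | -1 };

function weightedScore(category: string, votes: Vote[]): number {
  return votes.reduce((score, v) => {
    // Weight grows sublinearly with reputation, so experts count more
    // without completely drowning everyone else out.
    const weight = 1 + Math.log1p(v.rep[category] ?? 0);
    return score + v.direction * weight;
  }, 0);
}
```

The sublinear (log) weighting is one knob among many; the interesting experiments would be in how reputation per category is earned in the first place.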
No offense to the cool system and website though
Even his imaginary "snapshot/example-driven design tool" (described at the end of the article) seems quite intriguing and thought-provoking. I wonder if, with AI being so easily accessible nowadays, a retake on this tool could provide something that is actually usable and useful to people?
I'm not sure. It felt like we were moving towards dumb backends that sync automatically with frontends containing most of the logic. Things like https://localfirstweb.dev/ or https://electric-sql.com/ felt like the future
Writing more server code (as the quoted react-server-components article suggests) will increase the surface area where errors can occur. Limiting that to just the server or just the client feels like a much saner approach
Happy to answer any questions
If the answer is performance, how does Bun achieve things faster than Node?