I dunno here, is Rails a server application library because I can progressively integrate the different components of its total API, e.g. first use ActiveRecord, then adopt rails-api for the API layer, then adopt ActionView and Turbolinks for the end-user frontend? Or is there a different idea of framework at work here?
> Lifecycle methods like render, componentWillMount, etc. are just callbacks that get fired when you render a component. I don't think any library is immediately graduated to the class of "framework" the moment a callback is added.
But I don't write the code that says when they're called. Backbone is more of a library, for example, since it lets me do all that, and it can be progressively integrated into an app too (I can use just the models, just the views, etc.; see the sketch below). I mean, that's part of why people ran to Ember and Angular when they appeared: they didn't feel like doing all that.
Also see Crank.js: you can emulate React's Component API in Crank, but the reverse is not true. If I'm not writing the code that "turns the gears", so to speak, then to me I'm using a framework. That framework might have a small API surface or a large one. That's how I see it, at least. A framework, to me, is defined by a certain threshold of abstraction.
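To make the "who turns the gears" point concrete, here's a rough sketch of Backbone used purely as a library (TypeScript, assuming the backbone package plus @types/backbone; the counter example is made up):

```typescript
import * as Backbone from "backbone"; // assuming backbone + @types/backbone

// Backbone as a *library*: just a model, no router, no views.
// My code decides when to render and when state changes; nothing fires for me.
const counter = new Backbone.Model({ count: 0 });

function render(): void {
  const el = document.getElementById("counter");
  if (el) el.textContent = String(counter.get("count"));
}

counter.on("change:count", render); // I wired this callback up myself
render();                           // I decide when the first render happens
counter.set("count", 1);            // and I decide when the model changes
```

With React the flow is inverted: I hand it a component and it decides when render and the other lifecycle callbacks fire.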
This is an async task library because I can opt in and out of the scheduler, control it, or simply replace it outright. [0]
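As a rough illustration of what "replace the scheduler" can look like (purely hypothetical names, not the API of the library in [0]):

```typescript
// Hypothetical sketch: a task runner stays a "library" when the caller
// controls the scheduler. These names are made up, not from [0].
type Schedule = (fn: () => void) => void;

function runTasks(tasks: Array<() => void>, schedule: Schedule = queueMicrotask): void {
  for (const task of tasks) {
    schedule(task); // the caller's scheduler decides when work actually runs
  }
}

// Default scheduler:
runTasks([() => console.log("a"), () => console.log("b")]);
// Or replace it, e.g. run everything synchronously:
runTasks([() => console.log("sync")], (fn) => fn());
```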
I think you misunderstood the parent comment. "Progressively integrate" here means: you have a web page (facebook.com), you want to replace one part of it with React (say, the chat), and you rewrite just that portion of the page in React.
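For example, a minimal sketch of that kind of partial adoption (assuming React 18's createRoot; the #chat id is made up):

```typescript
import * as React from "react";
import { createRoot } from "react-dom/client";

// The existing server-rendered page stays as-is; only the #chat node becomes React.
function Chat(): React.ReactElement {
  return React.createElement("div", null, "chat UI goes here");
}

const mountPoint = document.getElementById("chat"); // one node in an otherwise non-React page
if (mountPoint) {
  createRoot(mountPoint).render(React.createElement(Chat));
}
```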
You can't take a Sinatra application and replace just one route with Rails. Well, you can use ActionController, which is just an opinionated wrapper around Rack with a lot of sugar, but that wouldn't be Rails, and you wouldn't get any of the Rails benefits by doing that.
[1] - https://www.dailydot.com/unclick/horizon-forbidden-west-aloy...
EDIT: 4 responses within minutes of each other, all pointing to different games, just underline my "too many to name" comment.
2013 called. It wants its opinion back.
Then they proceed to add platform-dependent code for vanity projects like Fuchsia alongside their other platform-dependent code for non-vanity projects like ChromeOS and Android.
IMO Dart is only alive because of Flutter; outside of it, it's dead.
The correct way to do it is to have separate hard drives for each OS. Then there is zero chance of them stepping on each other.
That was also an era of websites crashing all the damn time; in Firefox, a bad site would take down the entire browser.
Chrome was a significantly better browser for a while. Now it's just "why switch?" to your average consumer.
The limits of process-per-connection are pretty easy to run into accidentally, even at small scale. So now you need to manage another piece of infrastructure (typically a connection pooler) to deal with it.
Downtime for upgrades impacts everyone. Just because you're small scale doesn't mean your users don't expect availability (possibly contractually).
Replication: see point above.
General performance: Query complexity is the other part of the performance equation, and it has nothing to do with scale. Small data (data that fits in RAM) can still be hit with complex queries that benefit from things such as clustered indexes and hints.
Most places where I've seen this be an issue are ones where developers think that tweaking the number of connections will give them a linear boost in performance. Those are the same people who think adding more writers to an RWLock will improve write performance.
I agree that it's easy to run into, and that it's a pretty silly concurrency pattern by today's standards. At the same time, it's just something you need to be aware of when using PostgreSQL, and you design your service with that in mind.
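"Design with that in mind" mostly boils down to capping and reusing connections on the application side. A rough sketch with node-postgres (the env var and table are made up):

```typescript
import { Pool } from "pg"; // node-postgres

// Cap server connections instead of opening one per request:
// every PostgreSQL connection is a backend process, so keep the count small and reuse them.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // assumed env var
  max: 10, // upper bound on concurrent connections from this process
});

export async function getUser(id: number): Promise<unknown> {
  // pool.query checks out a connection and returns it to the pool when the query finishes
  const { rows } = await pool.query("SELECT * FROM users WHERE id = $1", [id]);
  return rows[0];
}
```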
The issues with IPv6, in my experience, come from its relative complexity compared to IPv4, and also from forgetting to manage it at all, since it often uses different tools and firewalls (e.g. ip6tables vs iptables), or, in the case of Ubiquiti EdgeRouters, the GUI doesn't expose ANY IPv6 firewall configuration at all.
The issue with IPv6 is that IPv6 links are significantly slower than IPv4 links today.
Customers kept paying Qualcomm for their SoCs with ARM-designed cores, so once again, Qualcomm had no reason to do anything but sit on their patents.
Intel had a similar story: since Sandy Bridge, the "x86_64" part of the CPU has barely changed, and most of the performance gain came from a better process, more custom instructions (AVX2, etc.), and higher TDP (since Ryzen).
It's not ARM vs x86; it's Apple's ARM cores vs everyone else's cores.