CRUD is good, except for maybe U and D, which are the proverbial erasers in "Accountants don't use erasers or they end up in jail".
It's all fun and games on the greenfield happy path. Made the wrong modification to someone's bank account? Just update the code so you no longer make the wrong modifications. Leave the bad data in place. Or, if you want to fix the bad data, maybe reach out to the customer to ask what their balance should be. Or check the dev logs — you do make your dev logs a better source of truth than the database, don't you? Once you 'know' what the balance should be, just use your CRUD operation to set things right, a.k.a. "create money".
I agree with the article that exposing R and U interfaces on all entities is a completely natural, human way of thinking about it. It allows for completely intuitive patterns like "check-then-act" and "read-modify-write" (which are also the names of race conditions).
EDIT: I forgot to comment on the obvious fallback here. If you really screw things up, it might be possible to restore your database to an earlier state and just disappear all the money that moved through the system after the database snapshot was created.
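The "read-modify-write" race mentioned above can be sketched in a few lines. This is a deterministic toy (no real threads or database, and the account name is made up), interleaving two clients by hand to show the lost update:

```python
# Two clients each read the same balance, add their deposit to the value
# they read, and write it back via a CRUD-style update. One deposit is
# silently lost.
balance = {"acct": 100}  # stands in for a CRUD-exposed record

def read(acct):
    return balance[acct]       # the R in CRUD

def update(acct, value):
    balance[acct] = value      # the U in CRUD

a = read("acct")               # client A reads 100
b = read("acct")               # client B reads 100 (interleaved request)
update("acct", a + 50)         # client A writes 150
update("acct", b + 30)         # client B writes 130 -- A's 50 is gone

print(balance["acct"])         # prints 130, not the expected 180
```

In a real system the interleaving comes from concurrent requests; the fix is usually an atomic domain operation ("deposit 50") or optimistic locking, not a smarter client.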
> CRUD is good, except for maybe U and D, which are the proverbial erasers in "Accountants don't use erasers or they end up in jail".
And people storing other people's personal data end up with fines if they don't remove the data on request. The whole situation got so bad that Timescale had several bugs which prevented U and D of single records; you could only delete a shard. At least C and R are fast.
The statement that everything boils down to CRUD is not wrong, but it is also not very useful. Whether you should think of some operation in CRUD terms depends on the level of abstraction you are working at. Create, Read, Update, and Delete are mostly technical terms that might not have a well-defined or well-understood meaning in a given domain.
Therefore, I would suggest using "CRUD" terminology mostly in the more technical parts of the application (e.g. an adapter that communicates with a database) and using business terms (from the "ubiquitous language", as it is called in domain-driven design) otherwise.
I once had a coworker arguing against DDD ideas with the killer argument that it was all just "CRUD". But it was in no way useful to think about the problem in those terms. Later it turned out that we had quite a lot of business logic, and "CRUD" wouldn't have been very helpful in expressing it.
CRUD boils down to Write (for the first time), Read, Write (not for the first time), and Write (overwriting the thing, or the pointer to it, with zeroes or something).
So, really, all computing boils down to a Turing machine. No need to learn any other technology.
Seeing a program written in the DDD way can be enlightening. Almost all programmers these days start with the database as a given, so they write code that is inseparable from the database, and it's no wonder it all looks like CRUD. But a well-architected program won't look like that; it will look like whatever problem it's trying to solve. Even something dead basic like a blog, which is often the canonical CRUD-app tutorial, shouldn't be about CRUD; it should be about "posting", "editing", "drafts", "publishing", etc.
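The blog example above can be sketched as a domain model built around those verbs. This is a hypothetical illustration, not code from any particular DDD framework; all names (`Post`, `edit`, `publish`) are made up for the sketch:

```python
# A blog post modeled around domain operations instead of row updates.
# The rules ("drafts are editable, published posts are not") live in the
# model, not in whoever calls a generic UPDATE.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Post:
    title: str
    body: str
    published_at: Optional[datetime] = None  # None means "still a draft"

    def edit(self, body: str) -> None:
        if self.published_at is not None:
            raise ValueError("published posts need a retraction, not an edit")
        self.body = body

    def publish(self, now: datetime) -> None:
        if self.published_at is not None:
            raise ValueError("already published")
        self.published_at = now

draft = Post("CRUD considered harmful", "first attempt")
draft.edit("second attempt")           # fine: drafts are editable
draft.publish(datetime(2024, 1, 1))
print(draft.published_at is not None)  # True
```

The point is that "publish" is not just "set a column"; it carries an invariant that a bare U endpoint would happily let callers violate.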
You speak from my soul! An often-underestimated factor is future growth in features and complexity. Following the CRUD way, this can easily end up in a mess where you can hardly see the domain concepts among all the technical code.
Hmmm ... it seems that CRUD has become synonymous with "forms that look like the database into which they write stuff", possibly because RoR and Django made "scaffolding" the admin interface so popular.
Of course everything that works with a database is CRUD because what else can you do with a database apart from create, retrieve, update and delete data?
I think that if you're doing your job right in interface design, the structure of your database shouldn't be immediately apparent to the user. The database design process and the interface design process should be completely separate.
I think it's true that CRUD is a universal way to compute stuff, since you can represent basically any state machine (plus memory) as a read/process/write loop. But sometimes you have better primitives available, like queues or pub/sub systems, even if technically you could implement those as CRUD under the hood if you wanted to.
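As a sketch of that point, here is a work queue faked on top of plain CRUD: create a row per task, read-and-update to claim it, delete when done. This is a toy in-memory version with made-up names, not any real queue's API:

```python
# A "queue" built out of CRUD operations on a tasks table.
tasks = {}      # stands in for a "tasks" table
next_id = 0

def enqueue(payload):            # C: create a pending row
    global next_id
    next_id += 1
    tasks[next_id] = {"payload": payload, "claimed": False}
    return next_id

def claim():                     # R + U: find a pending row, mark it claimed
    for task_id, row in tasks.items():
        if not row["claimed"]:
            row["claimed"] = True
            return task_id, row["payload"]
    return None

def complete(task_id):           # D: remove the finished row
    del tasks[task_id]

tid = enqueue("send-email")
task = claim()
print(task[1])                   # send-email
complete(task[0])
print(len(tasks))                # 0
```

Note that `claim()` itself is a check-then-act sequence: with two concurrent workers on a real database you'd need `SELECT ... FOR UPDATE` or similar, which is exactly why a purpose-built queue primitive is nicer than CRUD here.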
I'm not sure I'm following the leap here. Yes, a background process can be seen as CRUD, as you create it and all that. But it's what's creating it that's the problem with many frameworks. If I do a POST to /user, I don't also want to do a POST to /background-tasks/send-sign-up-email. I want both of those to happen as a result of my request to sign up a new user.
All business logic must happen somewhere. In a CRUD framework, much of it is just pushed out to the edges, onto those consuming the CRUD API, in order to keep the CRUD part clean. A trade-off I'm not sure I agree with.
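The alternative being argued for can be sketched as a single domain-level "sign up" operation that owns both effects, so the client never has to POST twice. Everything here (`sign_up`, the outbox list) is a hypothetical illustration of the transactional-outbox idea, not any framework's API:

```python
# One use-case function creates the user AND records the follow-up task.
users = []
outbox = []  # background tasks recorded inside the same operation

def sign_up(email: str) -> None:
    # In a real system both appends would happen in one DB transaction,
    # so you never get a user without their welcome email (or vice versa).
    users.append({"email": email})
    outbox.append({"task": "send_signup_email", "to": email})

sign_up("alice@example.com")
print(len(users), len(outbox))   # 1 1
```

A separate worker then drains the outbox; the client made one request and the business rule "new users get a welcome email" lives in exactly one place.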
This is a valuable exercise in the sense that it demonstrates how a mental model or a design pattern can be adapted to a problem. There can be good reasons for that, as with asynchronous processing, where CRUD adds value by offering a consistent representation of processing state. What is important is that very often many different representations of a domain exist, and it always makes sense to look at the problem from different angles and choose the most beneficial one.
The article says: everything that is CRUD is CRUD.
That's true. But NOT everything is CRUD.
Validation is not CRUD (you do not want your DB to validate your data in most cases — and even when the DB does validation, the V is not in CRUD). Form submissions are not CRUD. Queues (AMQP, SQS, pub/sub) are notoriously not CRUD. Scaling your cluster is not CRUD. Deploying your software is not CRUD. I could go on...
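The validation point can be made concrete: the V lives in a layer above the storage call, and the C underneath stays dumb. All names here are illustrative, not from any real validation library:

```python
# Validation as a separate step before the C of CRUD.
def validate_account(data: dict) -> list:
    errors = []
    if data.get("email", "").count("@") != 1:
        errors.append("email looks invalid")
    if data.get("balance", 0) < 0:
        errors.append("balance must be non-negative")
    return errors

def create_account(store: dict, data: dict) -> None:
    errors = validate_account(data)   # the V happens here...
    if errors:
        raise ValueError("; ".join(errors))
    store[data["email"]] = data       # ...and the C is dumb on purpose

store = {}
create_account(store, {"email": "a@b.example", "balance": 10})
print(len(store))   # 1
```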
If you want to do CRUD but without the headaches of U and D you can do this:
https://en.wikipedia.org/wiki/Bitemporal_modeling
It may seem scary at first and requires more thought, but the benefits are very real, even if the history is not exposed to users in any way.
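A rough sketch of the idea: "update" and "delete" become appends, so history is never erased. Real bitemporal modeling tracks both valid time and transaction time; this toy tracks only insertion order, purely to show the append-only shape, and all names are made up:

```python
# Updates and deletes as appends to an immutable log of record versions.
import itertools

history = []            # append-only log
_seq = itertools.count()

def upsert(key, value):
    history.append({"key": key, "value": value,
                    "seq": next(_seq), "deleted": False})

def delete(key):        # a "delete" that erases nothing
    history.append({"key": key, "value": None,
                    "seq": next(_seq), "deleted": True})

def current(key):       # the latest version wins
    rows = [r for r in history if r["key"] == key]
    if not rows or rows[-1]["deleted"]:
        return None
    return rows[-1]["value"]

upsert("balance", 100)
upsert("balance", 130)  # an "update" that erases nothing
delete("balance")
print(current("balance"), len(history))  # None 3
```

Every past state remains queryable (filter the log by `seq`), which is exactly the accountant's no-eraser property from the top of the thread.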
We're all pretty keen for it. Looks great!