Later edit: I should have said "Andy Matuschak and Michael Nielsen's".
To find a shared deck, I usually go here: https://ankiweb.net/shared/decks
Search for "rust".
You'll find two decks with 550-560 cards.
The older one was the original; whoever created it did most of the work and should be blessed by the heavens.
The newer one took the older one and replaced the screenshots of code with markdown equivalents, so they can be rendered by Anki while taking up less space. You can see this in the difference in the number of images between the two decks. This is the one I'd download and use.
A use case I've found is if you can find a deck that corresponds to a book you're reading.
I found a deck for the Rust book, and it's structured so that you see cards about things in the order you read about them. You simply read the book as usual, learning from your reading and entering code into a terminal as instructed, and then test your understanding with the cards.
When you end up reviewing older cards, you get the benefits of committing them to long-term memory, but you also get the opportunity to make more connections as you revisit concepts, which has its own benefits for deepening understanding.
I've found this makes reading the book 10x more effective. I get so much more out of it.
This all depends on having a source from which you're learning and the deck is just for testing understanding.
But yes, anytime you're using Anki to learn/understand instead of to remember, you're likely misusing it. Anki is a tool for memory.
But I left macOS because of the ever-worsening lockdown, Apple constantly messing with the design with no way back, and the move towards iOSisms. I'm glad I left when I see the sorry state it's in now.
I use KDE now which gives me a world of customisation options. I don't even have to use the themes or plugins to make it work the way I want to, which is quite different from the default.
Personally, I hated Databricks; it caused endless pain. Our org has less than 10 TB of data, so it's overkill. Good ol' Postgres or SQL Server does just fine on tables of a few hundred GB, and BigQuery chomps through 1 TB+ without breaking a sweat.
Everything in Databricks - everything - is clunky and slow. Booting up clusters can take 15 minutes, whereas something like BigQuery is essentially on-demand and instant. Data ETL'd into Databricks usually differs slightly from its original source in subtle but annoying ways. The IDE (which looks like a Jupyter notebook, but is not) absolutely sucks (limited/unfamiliar keyboard shortcuts, flaky, browser-only editing), and you're out of luck if you want to use your favorite IDE, vim, etc.
Almost every Databricks feature makes huge concessions on the functionality you'd get if you just used that feature outside of Databricks. For example, Databricks has its own git-like functionality, which covers the 5% of git that gets used most but offers no way to do the less common git operations.
My personal take is that Databricks is fine for users who'd otherwise use their laptop's compute/memory - it gets them an environment where they can access much more, at about 10x the cost of what you'd pay for the underlying infra if you just set it up yourself. Ironically, all the Databricks-specific cruft (config files, click-ops) required to get going will probably be difficult for that kind of user anyway, so it negates its value.
For more advanced users (i.e. those who know how to start an EC2 instance, or anything more advanced), Databricks will slow you down and be endlessly frustrating. It will basically 2-10x the time it takes to do anything, and sap the joy out of it. I almost quit my job of 12 years because the org moved to Databricks. I got permission to use better, faster, cheaper, less clunky, open-source tooling, so I stayed.
> I often defaulted to dumping notes into chat apps like Slack or iMessage
What makes you think people think differently about this app?
If people wanted all these features, they'd already be covered by Apple Notes (including the Quick Note feature, built into the OS when you mouse into the bottom-right corner of your screen), but for free, encrypted, and synced to all devices.
Double-entry bookkeeping was from its inception an error-correction code that could be calculated by hand.
Modern databases contain much more powerful error-correction methods in the form of transactional commits, so from a purely technical standpoint, double-entry bookkeeping is no longer needed at all. That's why programmers have a hard time understanding why it's there: for us, when we store a value in the DB, we can trust that it's been recorded correctly and forever as soon as the transaction ends.
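As a toy illustration of that trust (a sqlite3 sketch; the table, accounts, and amounts are all made up): either the whole update commits, or none of it does, so no manual cross-check is needed.

```python
import sqlite3

# In-memory DB with two made-up accounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("Checking", 500), ("Savings", 0)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance - 100 WHERE name = 'Checking'")
        raise RuntimeError("simulated crash between the two updates")
        conn.execute(
            "UPDATE accounts SET balance = balance + 100 WHERE name = 'Savings'")
except RuntimeError:
    pass

# The half-finished transfer was rolled back; both balances are untouched.
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'Checking': 500, 'Savings': 0}
```

A half-written transfer simply never becomes visible, which is the property double-entry's hand-calculable cross-check was originally there to approximate.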
The thing is, accounting culture still relies on the concepts derived from double-entry bookkeeping as described in the article; all those assets and debts and equity are still used by the finance people to make sense of the corporate world, so there's no chance they'll fall out of use anytime soon, at least 'in the context of a company' as you put it.
Now, would it be possible to create a new accounting system from scratch that relied on DB transactions and didn't depend on double entry? Sure it would; in fact, cryptocurrencies are exactly what happens when computer engineers design a money system unconstrained by tradition. But in practical terms it still needs to relate to the traditional categories in order to be understood and used.
```json
{ "from": "Checking", "to": "Savings", "amount": 100 }
```
This is basically what a crypto ledger does.
But the main reason we need double-entry accounting is that not all accounting entries are transfers. For example, if we are logging a sale, cash increases by $100 and revenue increases by $100. What's the "from" here? Revenue isn't an account that money is taken out of; it is the "source" of the cash increase. So something like the following doesn't capture the true semantics of the transaction.
```json
{ "from": "Revenue", "to": "Cash", "amount": 100 }
```
Instead, in accounting, the above transaction is captured as the following.
```json
{
  "transaction": "Sale",
  "entries": [
    { "account": "Cash", "debit": 100, "credit": null },
    { "account": "Revenue", "debit": null, "credit": 100 }
  ]
}
```
It gets worse with other entries like:
- Depreciation: Nothing moves. You're recognizing that a truck is worth less than before and that this consumed value is an expense.
- Accruals: Recording revenue you earned but haven't been paid for yet. No cash moved anywhere.
The limitation of ledgers with "from" and "to" is that it assumes conservation of value (something moves from A to B). But accounting tracks value creation, destruction, and transformation, not just movement. Double-entry handles these without forcing a transfer metaphor onto non-transfer events.
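To make the contrast concrete, here's a minimal sketch (in Python; the account names and amounts are made up for illustration) of a double-entry ledger whose only invariant is that each transaction's debits equal its credits. That weaker invariant is exactly what lets it record sales, depreciation, and accruals without forcing a "from"/"to" transfer onto them:

```python
from collections import defaultdict

class Ledger:
    def __init__(self):
        # account -> net debit balance (credits show up as negative)
        self.balances = defaultdict(int)

    def post(self, description, entries):
        # entries: list of (account, debit, credit) tuples
        debits = sum(d for _, d, _ in entries)
        credits = sum(c for _, _, c in entries)
        if debits != credits:
            raise ValueError(f"unbalanced transaction: {description}")
        for account, debit, credit in entries:
            self.balances[account] += debit - credit

ledger = Ledger()
# A $100 sale: cash (an asset) is debited, revenue is credited. No "from" account.
ledger.post("Sale", [("Cash", 100, 0), ("Revenue", 0, 100)])
# Depreciation: nothing moves; consumed value is recognized as an expense.
ledger.post("Depreciation", [("Depreciation Expense", 20, 0), ("Truck", 0, 20)])
# Accrual: revenue earned but not yet received as cash.
ledger.post("Accrued revenue", [("Accounts Receivable", 50, 0), ("Revenue", 0, 50)])
```

Note that a "from A to B" transfer is just the special case of one debit paired with one credit; the general form also covers the value-creation and value-destruction events above, which a transfer-shaped record can't express.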