I started out using Anki to learn French vocabulary. I'd make pairs of cards, with English on one side and French on the other. That was easy at first, but it became utterly brutal and depressing once I had several hundred cards in my deck. Too many near-synonyms.
I eventually took a hint from Khatzumoto's Japanese advice, and started making cloze cards. I'd copy and paste an entire paragraph from an ebook or a web page, and hide just one word. These cards were easy, but also effective.
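For anyone who wants to script the paste-and-hide step, here's a rough sketch of that workflow. The {{c1::…}} markup is Anki's cloze syntax; the helper function, the "hide half a word" option, and the example sentence are mine, just for illustration:

```python
def make_cloze(paragraph: str, target: str, hint_half: bool = False) -> str:
    """Replace the first occurrence of `target` with an Anki cloze deletion.

    If hint_half is True, leave the first half of the word visible,
    mimicking the lazier 'hide half a word' card described below.
    """
    if hint_half:
        half = len(target) // 2
        cloze = target[:half] + "{{c1::" + target[half:] + "}}"
    else:
        cloze = "{{c1::" + target + "}}"
    return paragraph.replace(target, cloze, 1)


text = "Je voudrais un café, s'il vous plaît."
print(make_cloze(text, "café"))
# Je voudrais un {{c1::café}}, s'il vous plaît.
print(make_cloze(text, "café", hint_half=True))
# Je voudrais un ca{{c1::fé}}, s'il vous plaît.
```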
Then I got lazier.
I'd only hide half a word. Or I'd just boldface a word, and mark the card as a "pass" if I could sort of remember that word in context.
And somehow, these cards actually worked better.
Then I got lazier still. If seeing a card made me groan "Oh, not that card", I'd just delete it. I configured Anki to permanently suspend any card I missed 3 times. If I actually needed to know a word, no worries, I'd see it again soon in a more helpful context. And my French vocabulary continued to grow by leaps and bounds.
I don't think the biggest improvements will come from better spaced repetition algorithms. I suspect the biggest wins will come from improved card formats. And it's surprisingly hard to make a card too easy to be useful.
(Source: 35,000+ Anki reps across three languages.)
I'm testing out a summarization/rephrase feature backed by LLMs that you can try in the demo. In HN fashion I'm trying to build this openly and gather feedback to see what works. I'd like to push this further in the active direction the article mentions with something like a Socratic dialogue mode where you're nudged to re-explain and examine ideas.
If anyone uses this thing/has feedback, let me know. Source is available too [2].
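For the curious, the rephrase call is roughly this shape. This is a minimal sketch assuming an OpenAI-style chat API; the model name and prompt are placeholders, not necessarily what the demo actually uses:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rephrase(passage: str, mode: str = "summarize") -> str:
    """Ask the model to summarize or rephrase a passage in plain language."""
    verb = "Summarize" if mode == "summarize" else "Rephrase"
    prompt = f"{verb} the following passage in plain language, keeping all key facts:\n\n{passage}"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

A Socratic mode would mostly be a different prompt on top of the same call: instead of rewriting the passage, the model asks you to re-explain it and pokes at gaps in your explanation.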
It was very satisfying to learn how transformers work, to finally be able to turn the obscure glyphs of the research papers into real code, but I think transformers are too big for what I can do on my own computer. The author mentioned that the toy transformer he builds in the final video took 15 minutes to train on his A100 (a $10,000 GPU), and the results weren't even that good: the transformer was spelling words correctly using character-level tokens. I guess that's something, but it's not GPT-4.
Even so, there were a lot of good tips to pick up along the way. This is a great series that I'm thankful to have. The "Backprop Ninja" video was hard work: you manually calculate the gradients and then compare your calculations against PyTorch's. It's great to have instant feedback telling you whether your gradients are correct or not.
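The gradient-checking pattern is easy to recreate on a toy example. This is my own snippet, not code from the course, but it's the same idea: derive the gradient by hand, then let torch.allclose tell you whether you got it right:

```python
import torch

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(3, 2, requires_grad=True)

loss = (x @ w).sum()
loss.backward()  # PyTorch fills in x.grad and w.grad

# Hand-derived gradients for loss = sum(x @ w):
# dL/dy = ones, dL/dx = dL/dy @ w.T, dL/dw = x.T @ dL/dy
with torch.no_grad():
    dy = torch.ones(4, 2)
    dx = dy @ w.T
    dw = x.T @ dy

print(torch.allclose(dx, x.grad))  # True means the manual math matches autograd
print(torch.allclose(dw, w.grad))
```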
A100s are available to rent by the hour at much lower cost. The author mentions this in the video description.
* https://asteriskmag.com/ - they just published the second issue. I really enjoy the topics covered and the quality of writing.
I use it for ad hoc notes and research (e.g. project ideas, an ML course I'm taking now, random interesting subjects beyond tech) as well as for daily journaling.
I start my day by writing in a "stream of consciousness" app I wrote for myself: https://enso.sonnet.io.
The format is as follows (ca. 20m each morning, 700-800 words):
- 100% unstructured description of my previous day, then
- 3-4 things I found beautiful or interesting, then
- a short TODO list for the day.
Then I just copy-paste the notes into my new daily note in Obsidian.
This seems to work really well for me. If I were to pick one improvement, it would be for Apple Reminders to treat multi-line text as separate entries when pasting. But again, it takes just a few seconds, and most of my workflow is just muscle memory at this stage.
PS. I'm thinking about writing an OpenAI Whisper-powered transcription tool for voice notes in Obsidian. If that's something you'd find useful and would be prepared to pay for, please let me know.
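To give an idea of what that tool might look like, here's a minimal sketch using the open-source openai-whisper package; the vault path, model size, and note layout are placeholders, not an existing plugin:

```python
import datetime
import pathlib

import whisper  # pip install openai-whisper

VAULT = pathlib.Path("~/Obsidian/MyVault").expanduser()  # hypothetical vault path


def transcribe_to_daily_note(audio_path: str) -> pathlib.Path:
    """Transcribe a voice note locally and append it to today's Obsidian note."""
    model = whisper.load_model("base")  # small model, runs on a laptop
    text = model.transcribe(audio_path)["text"].strip()

    note = VAULT / f"{datetime.date.today().isoformat()}.md"
    with note.open("a", encoding="utf-8") as f:
        f.write(f"\n## Voice note\n\n{text}\n")
    return note
```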
Thanks for sharing this. Love the constraints and how minimal it is.