I’m excited to share golf.vim, a Vim plugin that delivers daily editing challenges right inside your editor—like a crossover between VimGolf and Wordle. Each day (or on-demand by difficulty), you get a short puzzle where you aim to match a target text in the fewest keystrokes possible. Once you succeed, your strokes and time are submitted to a leaderboard, where you can compare your scores (Eagle, Birdie, Bogey, etc.) with others.
Key Highlights:
- Daily Challenges: Automatically fetch a fresh puzzle each day, or select challenges by difficulty, date, or tag.
- Leaderboards: After completing each puzzle, you will see the top 5 shortest keylogs from start to puzzle completion.
- Scoring: Each puzzle has a par; depending on how your keystroke count compares to it, your final result can be an Eagle, Birdie, Par, Bogey, or worse (there's a quick sketch of the convention below).
- Built in Public: The community suggested extra commands like :Golf easy and more flexible scoreboard displays, and helped shape the MVP.
Repo: https://github.com/vuciv/golf.vim
Check out the README for setup instructions, screenshots, and details. I’d love your feedback—especially if you have challenge ideas or feature suggestions. Enjoy, and happy golfing!
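The score labels follow standard golf terminology relative to par. Here's a tiny Python sketch of that convention, purely as an illustration of the terminology rather than the plugin's actual scoring code; labels beyond Eagle/Birdie/Par/Bogey are assumptions.

```python
# Illustrative only: standard golf terminology applied to a keystroke count vs. par.
# golf.vim's real scoring logic may differ from this sketch.
def golf_label(strokes: int, par: int) -> str:
    diff = strokes - par
    names = {-2: "Eagle", -1: "Birdie", 0: "Par", 1: "Bogey", 2: "Double Bogey"}
    if diff <= -3:
        return "Albatross"  # extremely short solutions
    return names.get(diff, f"{diff:+d} over par")

print(golf_label(12, 14))  # Eagle: two strokes under par
print(golf_label(17, 14))  # +3 over par
```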
1. (predictability) Games like to have a clear arc and tend to use at least some of their NPCs to move that forward; it is harder to do this with a model that could make a choice you don't predict. Games also tend to have a set of items, quests, and what-have-you that need boundaries around them.
2. (testing) Games like to test like crazy before launch, at least the AAA ones, so their QA folks just don't like a model that can produce effectively infinite responses/variants. Many studios then drop to a skeleton crew for maintenance and improvements after launch, whereas with ML models you actually need to keep improving the model, finding long-tail bugs as more players interact with the system, etc.
3. (cost) Games are usually very cost aware; it's far cheaper to have a set of human-written dialogue paths than to run a model, even an offline one. Cheaper in both actual dollar costs if you're talking about a high-end LLM service call, and CPU/GPU/memory costs if you're talking about an on-box system (a back-of-envelope sketch follows this list).
4. (internationalization/localization) AAA games need to launch quickly in many languages and locales. Using a model for NLP and dialogue management/natural language generation adds testing costs for each new language, whereas normally localization is just a fairly cheap translation pass that can be outsourced.
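To put very rough numbers on the cost point, here's a back-of-envelope sketch. Every figure in it (token prices, tokens per exchange, exchanges per player) is a hypothetical placeholder rather than measured data; the takeaway is only that live model calls have a marginal cost that scales with the player base, while scripted dialogue is essentially a one-time authoring/localization cost.

```python
# Back-of-envelope sketch: every number below is a hypothetical placeholder,
# chosen only to show how the costs scale. Substitute your own figures.
PRICE_PER_1M_INPUT_TOKENS = 0.50    # USD, hypothetical hosted-LLM pricing
PRICE_PER_1M_OUTPUT_TOKENS = 1.50   # USD, hypothetical
TOKENS_IN_PER_EXCHANGE = 600        # prompt + NPC/world context per line of dialogue
TOKENS_OUT_PER_EXCHANGE = 80        # generated NPC reply
EXCHANGES_PER_PLAYER = 500          # NPC conversations over a playthrough
PLAYERS = 2_000_000

cost_per_exchange = (
    TOKENS_IN_PER_EXCHANGE * PRICE_PER_1M_INPUT_TOKENS
    + TOKENS_OUT_PER_EXCHANGE * PRICE_PER_1M_OUTPUT_TOKENS
) / 1_000_000

total = cost_per_exchange * EXCHANGES_PER_PLAYER * PLAYERS
print(f"~${cost_per_exchange:.6f} per exchange, ~${total:,.0f} across the player base")
# With these made-up numbers: ~$0.00042 per exchange, ~$420,000 total; hand-written
# dialogue trees, by contrast, have near-zero marginal cost per player.
```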
There have been some fun experiments in this space, and I expect to see this improve and become common in the future, but it will take time and more work on how best to integrate a model into the flow of a game. I do love it for "presence", so that talking to NPCs feels more human-like.
Nick and the crew at AI Dungeon (and related projects) have always done interesting work in this space, trying out games where AI can be used in novel ways.
Re 1: I think people assume you have one LLM per character, but if you had specialized ones for each quest, item, etc., this would actually work quite well (see the routing sketch after this reply).
Re 3: I actually think that if you cached responses under certain conditions, costs could be cut significantly. It would require quite a robust notion of context, though, to still feel dynamic (sketched below).
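On point 1, here's a minimal sketch of what a per-quest/per-item routing layer could look like. Everything in it is hypothetical (the Specialist class, the quest/item names, the stubbed generate functions); the point is only the shape of the idea: a cheap router picks a small, narrowly bounded specialist instead of handing every character one general-purpose LLM.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: one small, tightly bounded "specialist" per quest/item,
# plus a hand-written fallback so behaviour stays predictable off the happy path.

@dataclass
class Specialist:
    name: str
    system_prompt: str                       # the narrow brief this model is tuned for
    generate: Callable[[str], str]           # stand-in for a small fine-tuned model
    allowed_topics: set[str] = field(default_factory=set)

def fallback_line(_: str) -> str:
    # Hand-written fallback keeps the NPC bounded when no specialist matches.
    return "Hmm, I don't know much about that. Ask me about the ferry or the old mine."

SPECIALISTS = [
    Specialist(
        name="ferry_quest",
        system_prompt="You only discuss the broken ferry and its missing parts.",
        generate=lambda q: f"[ferry-quest model] reply to: {q}",
        allowed_topics={"ferry", "parts", "river"},
    ),
    Specialist(
        name="rusted_key_item",
        system_prompt="You only discuss the rusted key and the door it opens.",
        generate=lambda q: f"[rusted-key model] reply to: {q}",
        allowed_topics={"key", "door", "mine"},
    ),
]

def route(player_utterance: str) -> str:
    # A real router could be a tiny intent classifier; keyword overlap stands in here.
    words = {w.strip("?.,!").lower() for w in player_utterance.split()}
    for spec in SPECIALISTS:
        if words & spec.allowed_topics:
            return spec.generate(player_utterance)
    return fallback_line(player_utterance)

print(route("Where do I find parts for the ferry?"))
print(route("Tell me about the weather."))
```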
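And on point 3, a minimal sketch of a context-keyed response cache, assuming some expensive call_model() function (a made-up name standing in for whatever client you'd actually use). The subtlety is the cache key: it has to fold in enough game state (quest stage, NPC mood, location, normalized utterance) that replayed lines still feel situational, but not so much that you never get a cache hit.

```python
import hashlib
import json

# Hypothetical stand-in for the expensive model call.
def call_model(npc_id: str, context: dict, player_utterance: str) -> str:
    return f"[model reply for {npc_id}: {player_utterance}]"

_cache: dict[str, str] = {}

def context_key(npc_id: str, context: dict, player_utterance: str) -> str:
    # Fold the *relevant* game state into the key. Too little context and replies
    # feel canned; too much (e.g. exact coordinates) and you never get cache hits.
    relevant = {
        "npc": npc_id,
        "quest_stage": context.get("quest_stage"),
        "npc_mood": context.get("npc_mood"),
        "location": context.get("location"),
        "utterance": player_utterance.strip().lower(),
    }
    return hashlib.sha256(json.dumps(relevant, sort_keys=True).encode()).hexdigest()

def npc_reply(npc_id: str, context: dict, player_utterance: str) -> str:
    key = context_key(npc_id, context, player_utterance)
    if key not in _cache:
        _cache[key] = call_model(npc_id, context, player_utterance)  # pay once
    return _cache[key]

ctx = {"quest_stage": "ferry_broken", "npc_mood": "grumpy", "location": "docks"}
print(npc_reply("ferryman", ctx, "Can you take me across?"))  # model call
print(npc_reply("ferryman", ctx, "Can you take me across?"))  # served from cache
ctx["quest_stage"] = "ferry_fixed"
print(npc_reply("ferryman", ctx, "Can you take me across?"))  # new key, new call
```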