Readit News
chilipepperhott commented on A spellchecker used to be a major feat of software engineering (2008)   prog21.dadgum.com/29.html... · Posted by u/Bogdanp
chilipepperhott · 12 days ago
Checking if a word is spelled correctly is easy. It is providing high-quality suggestions that is hard.
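A toy sketch of the asymmetry (the word list and frequencies below are made up, and no real checker is this naive): correctness is a single hash lookup, while suggestions mean generating edit-distance candidates and then ranking them sensibly.

```python
# Minimal sketch: membership check vs. suggestion ranking.
WORDS = {"hello": 1000, "help": 800, "hell": 300, "held": 200, "halo": 50}

def is_correct(word: str) -> bool:
    # The "easy" half: a set/dict lookup.
    return word.lower() in WORDS

def edits1(word: str) -> set[str]:
    # Every string one edit away (deletes, transposes, replaces, inserts).
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word: str, n: int = 3) -> list[str]:
    # The "hard" half: candidate generation plus ranking (here, by frequency).
    candidates = [w for w in edits1(word.lower()) if w in WORDS]
    return sorted(candidates, key=lambda w: -WORDS[w])[:n]

print(is_correct("helo"))   # False
print(suggest("helo"))      # ['hello', 'help', 'hell']
```

A production checker layers a lot more on top of the ranking step (keyboard distance, phonetics, context), which is where the real engineering goes.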
chilipepperhott commented on Open models by OpenAI   openai.com/open-models/... · Posted by u/lackoftactics
lukax · 19 days ago
Inference in Python uses harmony [1] (for the request and response format), which is written in Rust with Python bindings. Another of OpenAI's Rust libraries is tiktoken [2], used for all tokenization and detokenization (see the sketch after the links below). OpenAI Codex [3] is also written in Rust. It looks like OpenAI is increasingly adopting Rust (at least for inference).

[1] https://github.com/openai/harmony

[2] https://github.com/openai/tiktoken

[3] https://github.com/openai/codex
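A quick round-trip through tiktoken's Python bindings, which call into the Rust core; the encoding name below is just for illustration and is not necessarily the one the open models ship with.

```python
# tiktoken: Python API over a Rust tokenizer core.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # encoding choice is illustrative
tokens = enc.encode("Open models by OpenAI")
print(tokens)               # list of token ids
print(enc.decode(tokens))   # "Open models by OpenAI"
```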

chilipepperhott · 19 days ago
As an engineer who primarily uses Rust, I take this as a good omen.
chilipepperhott commented on Show HN: Refine – A Local Alternative to Grammarly   refine.sh... · Posted by u/runjuu
ggerganov · a month ago
Gemma 3n (the model used by this app) would run on any Apple Silicon device (even with 8GB RAM).
chilipepperhott · a month ago
Yup, but you're automatically giving up a ton of RAM that could be better used for Slack.
chilipepperhott commented on Show HN: Refine – A Local Alternative to Grammarly   refine.sh... · Posted by u/runjuu
sarmadgulzar · a month ago
I see that you're using gemma3n, which is a 4B-parameter model and uses around 3GB of RAM. How do you handle loading/offloading the model into RAM? Or is it always in memory as long as the app is running?
chilipepperhott · a month ago
I can see this as a major issue. If you start using this for grammar checking, you're basically subtracting 3GB of RAM from your system.
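One common way to soften that (a sketch of a generic pattern, not how Refine actually manages it): load the weights lazily and drop them after an idle timeout, so the ~3GB is only resident while you're actively checking.

```python
# Sketch: lazy load + idle unload of model weights (illustrative only).
import threading
import time

class ModelHolder:
    def __init__(self, load_fn, idle_seconds=120):
        self._load_fn = load_fn          # whatever loader the app uses
        self._idle_seconds = idle_seconds
        self._model = None
        self._last_used = 0.0
        self._lock = threading.Lock()
        threading.Thread(target=self._reaper, daemon=True).start()

    def get(self):
        # Load on first use; keep the model while requests keep arriving.
        with self._lock:
            if self._model is None:
                self._model = self._load_fn()
            self._last_used = time.monotonic()
            return self._model

    def _reaper(self):
        # Drop the weights after a period of inactivity so the RAM comes back.
        while True:
            time.sleep(5)
            with self._lock:
                idle = time.monotonic() - self._last_used
                if self._model is not None and idle > self._idle_seconds:
                    self._model = None
```

The trade-off is a cold-start delay on the first check after an idle period, which may or may not be acceptable for an always-on grammar checker.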

u/chilipepperhott

Karma: 420 · Cake day: April 12, 2022