I recently needed to classify thousands of documents according to some custom criteria. I wanted to use the LLM's classifications of those thousands of documents to train a faster, smaller BERT (well, ModernBERT) classifier that I could then run across millions of documents.
For my task, Llama 3.3 was still the best local model I could run. I tried newer ones (Phi4, Gemma3, Mistral Small) but they produced much worse results. Some larger local models are probably better if you have the hardware for them, but I only have a single 4090 GPU and 128 GB of system RAM.