DeepSeek-R1 (8B) exhibited roughly 2x more bias than the base Llama model. We applied targeted unlearning and reduced bias by up to 76% across race, gender, and nationality, while maintaining model performance (TruthfulQA: 9.8→9.9; LogiQA: 42.6%→42.5%). The whole run took ~1 hour on consumer hardware. The debiased model is on HuggingFace.
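The summary above doesn't spell out the unlearning procedure, so this is only a sketch of one common recipe, not the authors' method: gradient *ascent* on a "forget" set of unwanted examples, interleaved with gradient descent on a "retain" set to preserve performance. All names, the toy logistic model, and the learning rates here are hypothetical stand-ins for an LLM fine-tuning loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of mean binary cross-entropy for a logistic model.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)

# Train a toy model (stand-in for the pretrained LLM).
w = np.zeros(4)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# Hypothetical split: the first 20 rows play the role of biased
# examples to forget; the rest are behaviour we want to keep.
X_f, y_f = X[:20], y[:20]
X_r, y_r = X[20:], y[20:]

# Targeted unlearning: ascend the loss on the forget set while
# descending on the retain set so overall capability is preserved.
for _ in range(100):
    w += 0.1 * grad(w, X_f, y_f)   # push loss UP on forget examples
    w -= 0.1 * grad(w, X_r, y_r)   # keep loss LOW on retain examples

retain_acc = ((sigmoid(X_r @ w) > 0.5) == y_r).mean()
print(f"retain accuracy after unlearning: {retain_acc:.2f}")
```

In an actual LLM setting the same two-term update would run over model parameters with forget/retain text batches; the point of the sketch is only the opposing gradient directions, which is how unlearning can remove targeted behaviour without wrecking benchmarks like TruthfulQA or LogiQA.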