DeepSeek‑V3.2: Sparse Attention and Scaled RL Power an Open, Agentic Reasoner
Efficient sparse attention, combined with large-scale stabilized RL and synthetic agent tasks, pushes an open LLM to near-frontier reasoning and agentic performance, with a high-compute variant achieving gold-medal results.







