NVIDIA and MIT Release Lightning OPD Framework, Boosting Model Distillation Efficiency 4x While Eliminating GPU Memory Issues

According to reports, NVIDIA and MIT researchers released Lightning OPD (Offline On-Policy Distillation), a new post-training framework for large language models that eliminates the need to keep a teacher model running during training. By precomputing the teacher model’s log-probabilities offline, the framework improves training efficiency by 4x while freeing all GPU resources for student model training.
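The two-phase idea described above can be sketched in a few lines. This is a hypothetical toy illustration, not the actual Lightning OPD implementation: it assumes the teacher's per-token log-probabilities are computed once over a corpus and cached (phase 1), after which the student is trained against the cache with a KL-style distillation loss, with no teacher model in GPU memory (phase 2). All function names here are illustrative.

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a list of logits."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

# Phase 1 (offline): run the teacher once over the corpus and cache its
# per-token log-probabilities. After this step the teacher can be unloaded,
# freeing GPU memory for the student.
def precompute_teacher_logprobs(teacher_logits_per_token):
    return [log_softmax(logits) for logits in teacher_logits_per_token]

# Phase 2 (training): distillation loss for the student against the cache.
# Forward KL(teacher || student), summed over token positions. In real
# training this would feed an optimizer; here it just returns the scalar.
def distill_loss(student_logits_per_token, cached_teacher_logprobs):
    total = 0.0
    for s_logits, t_logp in zip(student_logits_per_token, cached_teacher_logprobs):
        s_logp = log_softmax(s_logits)
        total += sum(math.exp(tp) * (tp - sp) for tp, sp in zip(t_logp, s_logp))
    return total
```

The key property the sketch shows is that phase 2 touches only the cached numbers, never the teacher itself, which is what allows all GPU resources to go to the student.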

In testing on 8 NVIDIA H100 GPUs, Lightning OPD successfully distilled Qwen3-30B-A3B-Base (a 30-billion-parameter mixture-of-experts model) and achieved a score of 71.0 on the AIME 2024 benchmark, whereas standard OPD ran out of memory on the same hardware. For the smaller Qwen3-8B model, the framework required only 30 GPU hours to reach 69.9.
