
LLMs

OpenR1-Qwen-7B: Finally, an open recreation of R1

Edited Feb 16, 2025

A Hugging Face-led collaboration proves smaller models can punch above their weight with specialized training. A coalition of AI researchers has pulled back the curtain on OpenR1-Qwen-7B, an open-weights language model that replicates the mathematical prowess of China's cutting-edge DeepSeek-R1 through collaborative engineering. The project demonstrates…

Tiny but mighty: Open-source DeepScaleR-1.5B-Preview challenges big AI’s efficiency assumptions

Edited Feb 16, 2025

A new open-source language model is turning heads not for its size, but for what it achieves without the computational heft of its predecessors. DeepScaleR-1.5B-Preview, a 1.5-billion-parameter model developed by the Agentica Project, claims to outperform OpenAI's proprietary o1-preview in specialized reasoning tasks…
