
QwQ-32B

by Alibaba / Qwen Team · open-source · Last verified 2026-03-17

QwQ-32B is Alibaba's 32 billion parameter reasoning-focused language model that employs deep chain-of-thought reasoning to tackle complex mathematical, scientific, and logical problems. It achieves performance competitive with much larger models on reasoning benchmarks, demonstrating that focused reasoning training can be highly parameter-efficient.

https://huggingface.co/Qwen/QwQ-32B-Preview
Overall Grade: B (Above Average)
Adoption: B · Quality: A+ · Freshness: A · Citations: B+ · Engagement: F

Specifications

License: Apache 2.0
Pricing: open-source
Capabilities: mathematical-reasoning, logical-reasoning, chain-of-thought, code-reasoning, scientific-problem-solving
Integrations: Hugging Face, Ollama, vLLM
Use Cases: complex math problem solving, scientific research assistance, competitive programming, logical puzzle solving
API Available: Yes
Parameters: ~32B
Context Window: 32K tokens
Modalities: text
Training Cutoff: 2024
Tags: reasoning, qwen, alibaba, chain-of-thought, math
Added: 2026-03-17
Completeness: 100%
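Since Hugging Face is listed under Integrations, a minimal usage sketch via the Transformers library might look like the following. This is an assumption based on standard Transformers chat-template usage, not official Qwen guidance; the system prompt and the `build_messages` helper are hypothetical, and the actual model load (~32B parameters) requires substantial GPU memory.

```python
# Minimal sketch of running QwQ-32B-Preview via Hugging Face Transformers.
# The chat format and generation settings below are illustrative assumptions,
# not Qwen's documented defaults.

MODEL_ID = "Qwen/QwQ-32B-Preview"  # from the Hugging Face link above


def build_messages(question: str) -> list[dict]:
    """Wrap a reasoning question in the chat-message format Transformers expects.

    The system prompt here is a hypothetical example, not Qwen's official one.
    """
    return [
        {"role": "system", "content": "You are a careful step-by-step reasoner."},
        {"role": "user", "content": question},
    ]


if __name__ == "__main__":
    # Imported lazily so the helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    messages = build_messages("What is the smallest prime greater than 100?")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Generous token budget: chain-of-thought models emit long reasoning traces.
    outputs = model.generate(inputs, max_new_tokens=2048)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same weights can also be served through Ollama or vLLM, the other two listed integrations, without changing the prompt structure.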

Index Score: 62
Adoption: 65
Quality: 90
Freshness: 82
Citations: 72
Engagement: 0
