PaperLLMs v1.0

Self-Consistency Improves Chain of Thought Reasoning in Language Models

by Google Brain · free · Last verified 2026-03-17

Introduces self-consistency, a decoding strategy that samples diverse reasoning paths from a language model and returns the most consistent answer by marginalizing out the reasoning paths. Self-consistency is a simple, training-free technique that substantially improves chain-of-thought prompting on arithmetic and commonsense reasoning tasks.

https://arxiv.org/abs/2203.11171
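The aggregation step can be sketched in a few lines: sample several reasoning paths, keep only each path's final answer, and take a plurality vote. The sampler below is a hypothetical stand-in for stochastic (temperature-based) chain-of-thought decoding; the method itself is model-agnostic, so any function returning a (reasoning, answer) pair works.

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_path, n_samples=9):
    """Majority-vote over answers from independently sampled reasoning paths."""
    answers = [sample_path()[1] for _ in range(n_samples)]
    # Marginalizing out the reasoning paths reduces, for a final-answer
    # accuracy objective, to a plurality vote over the answers.
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

# Hypothetical stand-in for sampled chain-of-thought decoding: each call
# yields a (reasoning, answer) pair; 6 of 9 simulated paths agree on "18".
_paths = cycle([("…", "18"), ("…", "18"), ("…", "26")])

def toy_sampler():
    return next(_paths)
```

With this toy sampler, `self_consistency(toy_sampler, 9)` returns the majority answer `"18"` even though a third of the individual paths reach a different result, which is the core effect the paper reports.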
Overall grade: B+ (Good)
Adoption: A+ · Quality: A+ · Freshness: B+ · Citations: A+ · Engagement: F

Specifications

License
Open Access
Pricing
free
Capabilities
answer-aggregation, reasoning-diversity, ensemble-decoding
Integrations
None listed
Use Cases
arithmetic-reasoning, commonsense-qa, reasoning-accuracy-improvement
API Available
No
Tags
self-consistency, chain-of-thought, reasoning, ensemble, sampling
Added
2026-03-17
Completeness
100%

Index Score

76.7
Adoption
90
Quality
91
Freshness
73
Citations
90
Engagement
0
