
STaR: Bootstrapping Reasoning With Reasoning

by Stanford University / Google Brain · free · Last verified 2026-03-17

STaR (Self-Taught Reasoner) is a research paper introducing an iterative bootstrapping method for language models. The model improves its reasoning by generating rationales for problems, discarding those that lead to incorrect answers, and then fine-tuning on the successfully reasoned examples. Repeating this loop lets smaller models reach reasoning performance comparable to much larger ones.
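The generate-filter-fine-tune loop described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `generate` and `finetune` are hypothetical stand-ins for sampling a rationale from a language model and fine-tuning it on the kept examples.

```python
def star_bootstrap(model, dataset, generate, finetune, n_iters=3):
    """Toy sketch of the STaR loop.

    dataset: list of (problem, gold_answer) pairs.
    generate(model, problem) -> (rationale, predicted_answer)  # hypothetical
    finetune(model, kept_examples) -> new model                # hypothetical
    """
    for _ in range(n_iters):
        kept = []
        for problem, gold in dataset:
            rationale, answer = generate(model, problem)
            # Filter: keep a rationale only if it reached the correct answer.
            if answer == gold:
                kept.append((problem, rationale, gold))
        # Fine-tune on the self-generated, filtered rationales.
        model = finetune(model, kept)
    return model
```

The key design point is the filter: the model never sees its own incorrect rationales, so each fine-tuning round trains only on reasoning that demonstrably worked, which is what allows the bootstrapping to compound across iterations.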

https://arxiv.org/abs/2203.14465
Grade: B (Above Average)
Adoption: B+ · Quality: A+ · Freshness: B+ · Citations: B+ · Engagement: F

Specifications

License
Open Access
Pricing
free
Capabilities
Iterative Self-Improvement, Rationale Generation (Chain-of-Thought), Bootstrapping from a small set of examples, Solving mathematical word problems (e.g., GSM8K), Few-shot learning enhancement, Fine-tuning on self-generated data, Commonsense reasoning (e.g., CommonsenseQA)
Integrations
Use Cases
API Available
No
Tags
star, self-taught-reasoner, bootstrapping, reasoning, rationale-generation, iterative-learning, self-improvement, chain-of-thought, language-models, ai-research, fine-tuning
Added
2026-03-17
Completeness
1%

Index Score

67.5
Adoption
75
Quality
90
Freshness
72
Citations
78
Engagement
0
