BenchmarkLLMs v1.0

MedMCQA

by Pal et al. / IIT Kanpur · free · Last verified 2026-03-17

MedMCQA is a large-scale multiple-choice question dataset sourced from Indian medical entrance examinations such as AIIMS and NEET PG. It contains more than 194,000 four-option questions spanning roughly 2,400 healthcare topics across 21 medical subjects, designed to rigorously test a model's breadth of medical knowledge and its reasoning ability.

https://medmcqa.github.io
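The items above are four-option MCQs. A minimal sketch of rendering one MedMCQA-style record as a zero-shot prompt, assuming the field names used by the public release (`question`, `opa`–`opd`, and a zero-based correct-option index `cop`) — the record below is a made-up illustration, not a dataset row:

```python
def format_question(record):
    """Render a four-option MCQ record as a zero-shot prompt string."""
    options = [record["opa"], record["opb"], record["opc"], record["opd"]]
    lines = [record["question"]]
    for label, text in zip("ABCD", options):
        lines.append(f"{label}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

# Hypothetical record for illustration only.
example = {
    "question": "Which vitamin deficiency causes scurvy?",
    "opa": "Vitamin A", "opb": "Vitamin B12",
    "opc": "Vitamin C", "opd": "Vitamin D",
    "cop": 2,  # zero-based index of the correct option (Vitamin C)
}
prompt = format_question(example)
```

The gold letter for scoring can then be recovered as `"ABCD"[record["cop"]]`.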
Overall Grade: B (Above Average) · Adoption: B+ · Quality: A · Freshness: B · Citations: B+ · Engagement: F

Specifications

License: Apache-2.0
Pricing: free
Capabilities: medical-knowledge-assessment, clinical-reasoning-evaluation, multi-subject-question-answering, domain-specific-language-understanding, llm-performance-benchmarking, medical-fact-retrieval-testing, few-shot-learning-evaluation
API Available: No
Evaluated Models: gpt-4o, claude-opus-4, gemini-2-5-pro, meditron-70b
Metrics: accuracy
Methodology: 194,000+ four-option MCQs covering anatomy, physiology, biochemistry, and clinical subjects. Models are evaluated on the held-out test split without access to explanations. Accuracy is computed per subject and macro-averaged.
Last Run: 2026-02-01
Tags: medical, mcq, indian-medical, usmle, multi-subject, question-answering, benchmark, llm-evaluation, healthcare-ai, clinical-reasoning, natural-language-processing
Added: 2026-03-17
Completeness: 1%
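The methodology computes accuracy per subject and then macro-averages, so every subject counts equally regardless of how many questions it contributes. A small sketch, under the assumption that model outputs are available as `(subject, predicted, gold)` tuples (a shape chosen here for illustration; the listing does not specify one):

```python
from collections import defaultdict

def macro_accuracy(records):
    """Per-subject accuracy and its unweighted (macro) average.

    records: iterable of (subject, predicted_letter, gold_letter) tuples.
    Returns (per_subject_accuracy_dict, macro_average).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for subject, pred, gold in records:
        total[subject] += 1
        correct[subject] += int(pred == gold)
    per_subject = {s: correct[s] / total[s] for s in total}
    macro = sum(per_subject.values()) / len(per_subject)
    return per_subject, macro

# Tiny made-up example: Anatomy 1/2 correct, Physiology 1/1 correct.
per_subject, macro = macro_accuracy([
    ("Anatomy", "A", "A"),
    ("Anatomy", "B", "C"),
    ("Physiology", "D", "D"),
])
```

Note that the macro average (0.75 here) differs from plain micro accuracy (2/3), which is exactly why the split matters for a benchmark with uneven subject sizes.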

Index Score: 65.5

Adoption: 72
Quality: 86
Freshness: 68
Citations: 78
Engagement: 0
