
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

by Google AI · free · Last verified 2026-03-17

Introduced BERT, a bidirectional Transformer pre-trained with masked language modeling and next-sentence prediction objectives. Established the pretrain-then-fine-tune paradigm that dominated NLP for years and achieved state-of-the-art results on 11 NLP benchmarks.

https://arxiv.org/abs/1810.04805
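
The masked language modeling objective described above can be exercised directly through the huggingface-transformers integration listed under Specifications. A minimal sketch, assuming the public bert-base-uncased checkpoint on the Hugging Face Hub (the checkpoint name and example sentence are illustrative, not taken from this listing):

    # Masked language modeling: BERT predicts the token hidden behind [MASK]
    # using context from both directions. Checkpoint name is an assumption.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # Each prediction carries a candidate token and its probability.
    for pred in fill_mask("The capital of France is [MASK]."):
        print(pred["token_str"], round(pred["score"], 3))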
Overall Grade: A (Great)
Adoption: A+ · Quality: A+ · Freshness: C · Citations: A+ · Engagement: F

Specifications

License: Apache 2.0
Pricing: free
Capabilities: text-classification, question-answering, named-entity-recognition, pre-training
Integrations: huggingface-transformers (see the fine-tuning sketch below)
Use Cases: text-classification, question-answering, sentiment-analysis, ner
API Available: No
Tags: bert, pre-training, bidirectional, nlp, foundational, fine-tuning
Added: 2026-03-17
Completeness: 100%
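
The text-classification and fine-tuning capabilities above follow the paper's pretrain-then-fine-tune recipe: a small task head is placed on top of the pre-trained encoder and the whole model is trained on labeled data. A minimal sketch of one fine-tuning step via huggingface-transformers, assuming the public bert-base-uncased checkpoint; the two-label setup and example sentences are illustrative:

    # One fine-tuning step for sequence classification. A randomly
    # initialized classification head is stacked on the pre-trained
    # encoder and both are trained end to end.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    batch = tokenizer(
        ["the movie was great", "the movie was terrible"],
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    labels = torch.tensor([1, 0])

    # The model computes cross-entropy loss when labels are supplied.
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    print(outputs.logits.shape)  # torch.Size([2, 2])

In practice this step would sit inside an optimizer loop (or the library's Trainer), mirroring the task-specific fine-tuning experiments reported in the paper.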

Index Score

Overall: 82.8
Adoption: 97
Quality: 96
Freshness: 40
Citations: 99
Engagement: 0
