
LoRA: Low-Rank Adaptation of Large Language Models

by Microsoft Research · free · Last verified 2026-03-17

Introduces LoRA, which freezes the pretrained model weights and injects trainable low-rank decomposition matrices into each layer of the Transformer architecture. Compared to full fine-tuning of GPT-3 175B with Adam, LoRA reduces the number of trainable parameters by 10,000× and GPU memory requirements by 3×, with no added inference latency, enabling efficient fine-tuning of large language models.

https://arxiv.org/abs/2106.09685
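
For readers unfamiliar with the mechanism, below is a minimal PyTorch sketch of the idea described above: the pretrained weight W stays frozen while a trainable low-rank update B·A is added alongside it, scaled by alpha/r. The class name LoRALinear and the hyperparameter values are illustrative assumptions, not taken from the paper's reference implementation.

```python
# Minimal LoRA sketch (illustrative, not the reference implementation).
# The pretrained weight W is frozen; only the low-rank factors A and B
# are trained, so the effective weight is W + (alpha/r) * B @ A.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Frozen pretrained projection (stands in for a loaded checkpoint).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Trainable low-rank decomposition: B starts at zero so the
        # adapter is a no-op at initialization, as in the paper.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W x + (alpha/r) * B A x
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

    def merge(self) -> None:
        # Fold B A back into W for deployment.
        with torch.no_grad():
            self.base.weight += self.scaling * (self.lora_B @ self.lora_A)


layer = LoRALinear(768, 768, r=8, alpha=16)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only A and B count
```

Because B·A can be folded back into the frozen weight (the merge step above), the adapted model serves requests exactly as the original does, which is the "no added inference latency" property claimed in the summary.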
Overall grade: B+ (Good)
Adoption: A+ · Quality: A+ · Freshness: B · Citations: A · Engagement: F

Specifications

License: MIT
Pricing: free
Capabilities: parameter-efficient-fine-tuning, low-rank-adaptation, memory-efficient-training
Integrations: huggingface-peft, huggingface-transformers, axolotl
Use Cases: fine-tuning, domain-adaptation, instruction-tuning
API Available: No
Tags: lora, fine-tuning, low-rank, parameter-efficient, peft, adaptation
Added: 2026-03-17
Completeness: 100%
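
Given the huggingface-peft integration listed above, here is a minimal usage sketch for applying LoRA to a causal language model. The base model ("gpt2"), target modules, and hyperparameter values are illustrative assumptions, not part of this listing.

```python
# Minimal sketch of applying LoRA via the Hugging Face PEFT integration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example base model
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor alpha
    target_modules=["c_attn"],  # attention projection to adapt in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

From here, the wrapped model trains with any standard fine-tuning loop; only the injected A and B matrices receive gradients.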

Index Score

Overall: 78.8
Adoption: 95
Quality: 94
Freshness: 62
Citations: 88
Engagement: 0
