LoRA Library
by Hugging Face · open-source · Last verified 2026-03-17
Hugging Face's tooling and hub ecosystem for LoRA adapters and other parameter-efficient fine-tuning (PEFT) methods. It enables sharing, loading, and merging low-rank adaptation weights so models can be customized without full retraining.
https://huggingface.co/docs/peft

Overall grade: B (Above Average)
- Adoption: B+
- Quality: A
- Freshness: A
- Citations: B+
- Engagement: F
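The "low-rank adaptation" the library implements can be sketched in plain Python, with no dependencies: a frozen weight matrix W is adapted by a low-rank product B @ A scaled by alpha / r, so only A and B need to be trained. All dimensions and values below are illustrative, not tied to any real model.

```python
# Minimal LoRA sketch: adapted weight = W + (alpha / r) * (B @ A).
# Only A (r x d_in) and B (d_out x r) would be trainable; W stays frozen.

def matmul(X, Y):
    """Multiply two matrices given as lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d_out, d_in, r = 4, 6, 2   # hypothetical layer dims and LoRA rank
alpha = 4                  # LoRA scaling hyperparameter

W = [[0.0] * d_in for _ in range(d_out)]   # frozen base weight, d_out x d_in
A = [[1.0] * d_in for _ in range(r)]       # trainable down/up factors
B = [[1.0] * r for _ in range(d_out)]

scale = alpha / r
delta = matmul(B, A)                       # d_out x d_in low-rank update
W_adapted = [[w + scale * dw for w, dw in zip(wr, dr)]
             for wr, dr in zip(W, delta)]

# Why this is "parameter-efficient": full fine-tuning touches d_out * d_in
# weights, while LoRA trains only r * (d_in + d_out).
full_params = d_out * d_in           # 4 * 6 = 24
lora_params = r * (d_in + d_out)     # 2 * (6 + 4) = 20
print(full_params, lora_params)      # prints: 24 20
print(W_adapted[0][0])               # 0 + (4/2) * (1*1 + 1*1) = 4.0
```

The savings grow with model size: for realistic dims (e.g. 4096 x 4096 with r = 8), the trained parameters shrink by roughly a factor of d / (2r).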
Specifications
- License: Apache-2.0
- Pricing: open-source
- Capabilities: lora-training, adapter-merging, parameter-efficient-tuning, model-sharing, multi-adapter
- Integrations: hugging-face, axolotl, unsloth
- Use Cases: efficient-fine-tuning, adapter-sharing, multi-task-models, model-customization
- API Available: Yes
- SDK Languages: python
- Deployment: self-hosted
- Rate Limits: N/A (open-source)
- Data Privacy: self-hosted, user-managed
- Tags: lora, adapters, model-hub, fine-tuning
- Added: 2026-03-17
- Completeness: 100%
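A typical workflow for the lora-training and adapter-merging capabilities listed above, sketched with the PEFT API. The model name, rank, and target modules are illustrative choices, and the import is guarded so the snippet still runs when `peft`/`transformers` are not installed:

```python
# Hedged sketch of LoRA training setup and adapter merging with PEFT.
# Hyperparameters below are example values, not recommendations.
import importlib.util

lora_settings = {
    "r": 8,                      # rank of the low-rank update
    "lora_alpha": 16,            # scaling (effective scale = alpha / r)
    "target_modules": ["c_attn"],  # GPT-2's fused attention projection
    "lora_dropout": 0.05,
    "task_type": "CAUSAL_LM",
}

if importlib.util.find_spec("peft") and importlib.util.find_spec("transformers"):
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # small example model
    model = get_peft_model(base, LoraConfig(**lora_settings))
    model.print_trainable_parameters()  # only the LoRA weights are trainable

    # ... train as usual, then fold the adapter into the base weights:
    merged = model.merge_and_unload()
else:
    print("peft/transformers not installed; see https://huggingface.co/docs/peft")
```

Merging via `merge_and_unload` bakes the low-rank update into the base weights, so the merged model serves with zero adapter overhead; keeping adapters separate instead enables the multi-adapter use case, where several task-specific LoRAs share one base model.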
Index Score: 63.1
- Adoption: 72
- Quality: 84
- Freshness: 85
- Citations: 70
- Engagement: 0