LoRA Library
by Hugging Face · free · Last verified 2026-03-17
The LoRA Library, part of Hugging Face's PEFT (Parameter-Efficient Fine-Tuning) package, provides tools to create, share, and use LoRA (Low-Rank Adaptation) adapters. It enables efficient customization of large pre-trained models by training only a small number of new weights, drastically reducing computational cost and storage requirements compared to full fine-tuning.
https://huggingface.co/docs/peft
B (Above Average)
Adoption: B+ · Quality: A · Freshness: A · Citations: B+ · Engagement: F
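As the description notes, LoRA customizes a frozen base model by training only a small set of new low-rank weights. A minimal sketch of attaching a LoRA adapter with PEFT's `LoraConfig` and `get_peft_model` follows; the base model choice and hyperparameter values are illustrative assumptions, not recommendations.

```python
# Minimal LoRA setup with PEFT. The base model and the hyperparameters
# (r, lora_alpha, target_modules) are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the low-rank update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```

The wrapped model trains with an ordinary Transformers `Trainer` loop, and the finished adapter (a few megabytes, not the full model) can be shared on the Hub with `model.push_to_hub(...)`.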
Specifications
- License
- Apache-2.0
- Pricing
- free
- Capabilities
- Low-Rank Adaptation (LoRA) training, Adapter loading from Hugging Face Hub, Merging multiple adapters into a single model, Support for various PEFT methods (e.g., Prefix Tuning, P-Tuning), Integration with 8-bit and 4-bit quantization for further memory reduction, Dynamic adapter loading and switching, Compatibility with Transformers and Diffusers libraries, Fine-tuning specific model layers or modules (adapter loading, switching, merging, and quantized training are sketched in the examples below the specifications)
- Integrations
- Transformers, Diffusers, Accelerate, bitsandbytes, Hugging Face Hub
- Use Cases
- Efficient fine-tuning of large language models, Customizing diffusion models, Transfer learning under tight compute budgets, Sharing and reusing task-specific adapters via the Hugging Face Hub
- API Available
- Yes
- SDK Languages
- python
- Deployment
- self-hosted
- Rate Limits
- N/A (open-source)
- Data Privacy
- Self-hosted, user-managed
- Tags
- lora, adapters, model-hub, fine-tuning, peft, parameter-efficient-fine-tuning, hugging-face, model-customization, transfer-learning, llm, diffusion-models
- Added
- 2026-03-17
- Completeness
- 0.6%
Index Score
63.1
Adoption: 72
Quality: 84
Freshness: 85
Citations: 70
Engagement: 0