
llama.cpp

by ggerganov · open-source · Last verified 2026-04-24

C/C++ inference engine for running quantized LLMs on CPUs and consumer GPUs.

https://github.com/ggerganov/llama.cpp
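The engine's central technique is block quantization: weights are stored in small fixed-size blocks that share a single scale factor, so each weight fits in a few bits. The sketch below illustrates the idea in plain Python with a toy 8-value block; llama.cpp's real Q4_0 format packs 32 weights per block with an fp16 scale into the GGUF file format, so this is a simplified illustration, not the actual on-disk layout.

```python
# Simplified 4-bit block quantization in the spirit of llama.cpp's Q4_0.
# Illustrative only: real blocks hold 32 weights and pack two 4-bit
# values per byte alongside an fp16 scale.

def quantize_block(weights):
    """Quantize a block of floats to 4-bit ints in [-8, 7] plus one scale."""
    amax = max(weights, key=abs)               # signed value of largest magnitude
    scale = amax / -8.0 if amax != 0 else 1.0  # map the extreme weight to -8
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return scale, q

def dequantize_block(scale, q):
    """Reconstruct approximate float weights from the scale and 4-bit ints."""
    return [scale * v for v in q]

block = [0.1, -0.8, 0.4, 0.05, -0.2, 0.7, -0.5, 0.3]
scale, q = quantize_block(block)
approx = dequantize_block(scale, q)
# Worst-case reconstruction error is bounded by half the scale step.
err = max(abs(a - b) for a, b in zip(block, approx))
```

Storing one scale per small block (rather than per tensor) is what keeps the error bounded even when weight magnitudes vary across a layer, which is why 4-bit models remain usable on CPU-class hardware.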
Overall grade: C (Below Average)
Adoption: C+ · Quality: B+ · Freshness: A · Citations: C · Engagement: F

Specifications

License: Open Source
Pricing: open-source
Capabilities:
Integrations:
Use Cases:
API Available: No
SDK Languages: cpp, python
Deployment: self-hosted, embedded
Rate Limits: N/A (local, hardware-limited)
Data Privacy: Fully local; no data sent externally
Tags: inference, cpu, quantization, c++
Added: 2026-04-24
Completeness: 60%

Index Score: 44

Adoption: 50 · Quality: 70 · Freshness: 80 · Citations: 40 · Engagement: 0
