
RWKV-6

by RWKV Foundation · open-source · Last verified 2026-03-17

The sixth generation of the RWKV architecture, combining transformer-level quality with RNN efficiency for linear-time inference. Its fixed-size recurrent state gives constant memory usage regardless of sequence length, making it well suited to resource-constrained and streaming applications.
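The constant-memory claim follows from the recurrent formulation: each step folds the new token into a fixed-size state rather than appending to a growing cache. A minimal sketch of that idea (not the actual RWKV-6 kernel; the function name, dimensions, and simplified per-channel decay are illustrative):

```python
import numpy as np

def rwkv_step(state, r, k, v, w):
    """One step of a simplified RWKV-style linear-attention cell.

    state: (d, d) running outer-product memory -- its size is fixed,
           so memory stays O(1) no matter how long the sequence runs
    r, k, v: (d,) receptance, key, value vectors for the current token
    w: (d,) per-channel decay factors in (0, 1)
    """
    # decay the old memory channel-wise, then add the new key-value outer product
    state = w[:, None] * state + np.outer(k, v)
    # read the memory with the receptance vector
    out = r @ state
    return out, state

rng = np.random.default_rng(0)
d, seq_len = 8, 100
state = np.zeros((d, d))
for _ in range(seq_len):
    r, k, v = rng.standard_normal((3, d))
    w = rng.uniform(0.5, 0.99, d)
    out, state = rwkv_step(state, r, k, v, w)

# the state shape never depends on seq_len
print(state.shape)
```

Each step costs O(d²) time and the state stays (d, d), which is why inference is linear in sequence length with constant memory, in contrast to a transformer's KV cache that grows with every token.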

https://wiki.rwkv.com
Overall Grade: D (Poor)
Adoption: D · Quality: B · Freshness: C+ · Citations: C · Engagement: F

Specifications

License: Apache 2.0
Pricing: open-source
Capabilities: text-generation, linear-time-inference, constant-memory, streaming-generation, multilingual
Integrations: huggingface, rwkv-runner, ollama
Use Cases: edge-deployment, streaming-applications, resource-constrained-inference, real-time-generation
API Available: No
Parameters: 14B
Context Window: Unlimited (linear RNN; fixed-size state)
Modalities: text
Training Cutoff: Mid 2024
Tags: llm, open-source, rnn-architecture, linear-attention, efficient
Added: 2026-03-17
Completeness: 85%

Index Score: 34.8

Adoption: 32 · Quality: 60 · Freshness: 55 · Citations: 40 · Engagement: 0
