Paper · AI Ethics & Safety · v1.0

Representation Engineering: A Top-Down Approach to AI Transparency

by Center for AI Safety / UC Berkeley · free · Last verified 2026-03-17

Representation Engineering (RepE) is a top-down AI transparency technique for interpreting and controlling Large Language Models. It fits linear probes to the activation differences elicited by contrastive prompts, identifying and manipulating high-level concepts such as truthfulness and emotion without retraining or fine-tuning the model.
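The reading-and-steering loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the paper extracts directions with techniques such as PCA over difference vectors, while this sketch uses a simple mean-difference direction, and the "activations" here are random stand-ins for hidden states captured from a real model.

```python
import numpy as np

# Hypothetical hidden states (batch, hidden_dim) captured at one layer
# for a contrastive prompt pair -- random stand-ins for real activations.
rng = np.random.default_rng(0)
hidden_dim = 64
pos_acts = rng.normal(0.5, 1.0, (8, hidden_dim))   # e.g. "respond honestly" prompts
neg_acts = rng.normal(-0.5, 1.0, (8, hidden_dim))  # e.g. "respond deceptively" prompts

# Reading: take the concept direction as the normalized mean difference
# of activations between the two contrastive prompt sets.
direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def concept_score(activation: np.ndarray) -> float:
    """Project an activation onto the direction to score the concept."""
    return float(activation @ direction)

def steer(activation: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Control: nudge an activation toward the concept by adding the
    scaled direction (what a forward hook would do inside the model)."""
    return activation + alpha * direction

act = rng.normal(0.0, 1.0, hidden_dim)
steered = steer(act)
print(concept_score(steered) > concept_score(act))  # True: steering raises the score
```

In a real model the same arithmetic runs inside a layer hook at generation time, which is why no retraining or fine-tuning is needed.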

https://arxiv.org/abs/2310.01405
Overall Grade: B (Above Average)
Adoption: B+ · Quality: A+ · Freshness: B · Citations: B+ · Engagement: F

Specifications

License
Open Access
Pricing
free
Capabilities
Reading high-level conceptual representations, Controlling model behavior without fine-tuning, Steering model outputs (e.g., towards honesty), Identifying abstract concepts like power-seeking, Enhancing model interpretability and transparency, Improving AI safety and alignment, Detecting and modifying model-internal states, Activation vector manipulation
Integrations
Use Cases
API Available
No
Tags
interpretability, transparency, representation-engineering, ai-alignment, model-control, llm-safety, activation-steering, ai-ethics, mechanistic-interpretability
Added
2026-03-17
Completeness
0.9%

Index Score: 65.2

Adoption: 70 · Quality: 91 · Freshness: 68 · Citations: 76 · Engagement: 0
