
Sociotechnical Safety Evaluation of Large Language Models

by DeepMind / Multiple Institutions · free · Last verified 2026-03-17

This paper argues that LLM safety evaluations must account for sociotechnical contexts—who uses a system, in what social settings, and with what deployment constraints—rather than treating safety as a purely technical property of the model. It proposes a framework integrating stakeholder analysis, deployment context, and systemic risk assessment into safety evaluation pipelines.
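The framework's core move is to pair model-level risk measurements with deployment context and stakeholder analysis rather than reporting a capability score alone. A minimal sketch of that idea follows; the class names, fields, and the toy aggregation rule are illustrative assumptions, not the paper's actual framework or code.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and the scoring rule below are
# assumptions for exposition, not taken from the paper.

@dataclass
class Stakeholder:
    role: str            # e.g. "end user", "moderator", "bystander"
    exposure: float      # 0.0-1.0: how directly this group is affected

@dataclass
class DeploymentContext:
    setting: str                              # e.g. "consumer chatbot"
    safeguards: list[str] = field(default_factory=list)

@dataclass
class SafetyEvaluation:
    capability_risk: float          # model-level benchmark result, 0.0-1.0
    stakeholders: list[Stakeholder]
    context: DeploymentContext

    def systemic_risk(self) -> float:
        """Scale the technical risk score by the most-exposed stakeholder
        group, discounting for each deployment safeguard (toy rule)."""
        exposure = max((s.exposure for s in self.stakeholders), default=0.0)
        mitigation = 0.9 ** len(self.context.safeguards)
        return self.capability_risk * exposure * mitigation

evaluation = SafetyEvaluation(
    capability_risk=0.6,
    stakeholders=[Stakeholder("end user", 0.8), Stakeholder("bystander", 0.3)],
    context=DeploymentContext("consumer chatbot", ["content filter"]),
)
print(round(evaluation.systemic_risk(), 3))  # 0.6 * 0.8 * 0.9 = 0.432
```

The point of the sketch is structural: the same `capability_risk` yields different systemic risk under different deployment contexts, which is the paper's argument that safety is not a purely technical property of the model.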

https://arxiv.org/abs/2310.11986
Overall grade: C+ (Average)
Adoption: C+ · Quality: A · Freshness: B+ · Citations: C+ · Engagement: F

Specifications

License: Open Access
Pricing: free
Capabilities: safety-evaluation, sociotechnical-analysis, risk-assessment, stakeholder-analysis
Integrations: (none listed)
Use Cases: ai-safety, responsible-ai, deployment-risk-assessment, policy-development
API Available: No
Tags: ethics, safety, evaluation, sociotechnical, red-teaming, llm
Added: 2026-03-17
Completeness: 100%

Index Score

Overall: 53.7
Adoption: 55
Quality: 86
Freshness: 78
Citations: 58
Engagement: 0
