Model · Computer Vision · v1

Shap-E

by OpenAI · open-source · Last verified 2026-03-17

Shap-E is OpenAI's improved 3D generation model that generates implicit neural representations of 3D objects (which can be rendered as textured meshes or NeRFs) from text or image prompts, producing richer geometry and appearance than its predecessor Point-E. It encodes 3D assets into a compact latent space and trains a diffusion model over that space, enabling more coherent shape-and-texture output.
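The pipeline described above (sample a latent with the diffusion model, then decode it to a textured mesh) can be sketched with the Python API from the linked repository. This follows the upstream sample notebooks: `text300M` is the text-conditioned diffusion model, `transmitter` is the latent decoder, and `decode_latent_mesh` exports geometry. Running it requires downloading the pretrained weights on first use, and a GPU is strongly recommended; treat this as a sketch of the intended usage, not a guaranteed-current API.

```python
import torch

from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the latent decoder, the text-conditional diffusion model,
# and the diffusion schedule config (weights download on first call).
xm = load_model("transmitter", device=device)
model = load_model("text300M", device=device)
diffusion = diffusion_from_config(load_config("diffusion"))

# Run diffusion over Shap-E's compact latent space, conditioned on text.
latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=["a red office chair"]),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode the sampled latent into a triangle mesh and export it as OBJ.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open("chair.obj", "w") as f:
    mesh.write_obj(f)
```

The same latent can instead be rendered as a NeRF via the repo's rendering utilities; the choice of decoder output (mesh vs. NeRF) is what distinguishes Shap-E's implicit representation from Point-E's explicit point clouds.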

https://github.com/openai/shap-e
Overall Grade: C+ (Average)
Adoption: C+ · Quality: B+ · Freshness: C+ · Citations: B+ · Engagement: F

Specifications

License
MIT
Pricing
open-source
Capabilities
text-to-3d, image-to-3d, nerf-generation, textured-mesh-export, implicit-neural-representation
Integrations
huggingface, pytorch
Use Cases
3d-asset-prototyping, game-development-research, product-visualization, research, creative-exploration
API Available
Yes
Parameters
~300M
Context Window
N/A
Modalities
text, image, 3d
Training Cutoff
2023
Tags
3d-generation, text-to-3d, implicit-representation, openai, open-source
Added
2026-03-17
Completeness
100%

Index Score

52.8
Adoption
52
Quality
70
Freshness
56
Citations
72
Engagement
0
