Docker ML Deployment
by AaaS · open-source · Last verified 2026-03-01
Containerizes ML models and inference servers with optimized Docker images for production deployment. Includes multi-stage builds for minimal image size, GPU support configuration, health checks, and docker-compose setups for full inference stacks.
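The multi-stage build described above might look like the following sketch. The image tags, file names, port, and app module (`app:app`) are illustrative assumptions, not taken from the script itself; for GPU inference the runtime base would typically be swapped for a CUDA runtime image.

```dockerfile
# Build stage: install Python dependencies into an isolated prefix
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed packages and the app code,
# keeping the final image minimal
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
EXPOSE 8000
# Container-level health check probing an assumed /health endpoint
HEALTHCHECK --interval=30s --timeout=5s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Splitting build and runtime stages keeps build tools and pip caches out of the shipped image, which is the main lever behind the image-size reduction the script advertises.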
https://aaas.blog/script/docker-ml-deployment
Overall grade: C+ (Average)
- Adoption: B
- Quality: B+
- Freshness: B+
- Citations: C+
- Engagement: F
Specifications
- License: MIT
- Pricing: open-source
- Capabilities: multi-stage-builds, gpu-configuration, health-checks, compose-setup, image-optimization
- Integrations: docker, vllm, torch, fastapi
- Use Cases: model-containerization, inference-server-deployment, development-environment, cloud-deployment
- API Available: No
- Language: python
- Dependencies: docker, vllm, torch, fastapi, uvicorn
- Environment: Python 3.11+ with Docker and NVIDIA Container Toolkit
- Est. Runtime: 5-15 minutes for build; 1-2 minutes for deployment
- Tags: script, automation, docker, deployment, containerization
- Added: 2026-03-17
- Completeness: 100%
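The compose-setup and gpu-configuration capabilities listed above typically combine into a docker-compose file along these lines. Service and image names here are assumptions, and GPU access relies on the NVIDIA Container Toolkit noted in the Environment row.

```yaml
# Illustrative compose file for a single-GPU inference service (names assumed)
services:
  inference:
    image: ml-inference:latest        # built from the project's Dockerfile
    ports:
      - "8000:8000"
    healthcheck:                      # probes an assumed /health endpoint
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        reservations:
          devices:                    # requires NVIDIA Container Toolkit
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

A full inference stack would usually add further services (e.g. a reverse proxy or a vLLM model server) alongside this one, each with its own health check.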
Index Score: 54.2
- Adoption: 64
- Quality: 78
- Freshness: 76
- Citations: 52
- Engagement: 0