Deploy Docker Containers with AZIN

AZIN auto-detects your Dockerfile, builds the image, and deploys to your own GCP account (GKE Autopilot) or to lttle.cloud (in early access) with scale-to-zero. AWS and Azure are on our roadmap. No cluster YAML. No gcloud flags. Push code, pick a cloud, go live.

#How it works

Connect a Git repo that contains a Dockerfile. AZIN detects it automatically via Railpack, builds the image, and deploys to the cloud you choose. Every push to your main branch triggers a new deploy. PRs get preview environments.

The flow:

  • Connect your GitHub, GitLab, or Bitbucket repo
  • AZIN detects the Dockerfile at the root of your project
  • Choose a deployment target: your own GCP account or lttle.cloud (early access); AWS and Azure are on the roadmap
  • Push code — AZIN builds and deploys automatically
  • Each PR gets its own preview environment with a unique URL

No docker build and docker push scripts. No Artifact Registry to configure. No Kubernetes manifests.

#Deployment config

A complete azin.yaml for a Docker-based web app with Postgres and Redis:

# azin.yaml
name: my-api
services:
  web:
    build:
      type: dockerfile
      path: ./Dockerfile
    port: 8000
    cloud: gcp
    region: europe-west4
    scaling:
      min: 1
      max: 10
      target_cpu: 70
    healthcheck:
      path: /health
      interval: 30
    env:
      DATABASE_URL: "@db.url"
      REDIS_URL: "@cache.url"
  db:
    type: postgres
    plan: production
    cloud: gcp
    region: europe-west4
  cache:
    type: redis
    cloud: gcp
    region: europe-west4

The @db.url and @cache.url references are service bindings — AZIN injects connection strings automatically. No hardcoded credentials.
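
At runtime those bindings arrive as ordinary environment variables, so your application code reads them the same way it would anywhere else. A minimal Python sketch (the psycopg and redis client libraries are illustrative choices, not something AZIN requires):

import os

import psycopg  # assumed Postgres driver; any client that accepts a connection URL works
import redis    # assumed Redis client

# AZIN injects the values bound via @db.url and @cache.url in azin.yaml
db = psycopg.connect(os.environ["DATABASE_URL"])
cache = redis.Redis.from_url(os.environ["REDIS_URL"])

with db.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())

cache.set("hello", "world")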

Deploy with one command once the CLI ships (it's on our roadmap); today, deployments are triggered by a Git push or from the AZIN dashboard:

azin deploy

#Multi-stage Dockerfiles

AZIN handles multi-stage builds natively. This is the recommended pattern for production Docker images — separate build dependencies from the runtime to reduce image size and attack surface.

A Node.js API with a multi-stage build:

# Build stage
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies: the build step needs dev tooling too
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so only runtime packages reach the final image
RUN npm prune --omit=dev
 
# Runtime stage
FROM node:22-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
 
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:8000/health || exit 1
CMD ["node", "dist/index.js"]

A Python FastAPI app:

# Build stage
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .

EXPOSE 8000
# slim images ship without wget or curl, so probe the health endpoint with Python itself
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Both examples follow Docker best practices as of February 2026: minimal base images (alpine, slim), multi-stage separation, explicit HEALTHCHECK, and no dev dependencies in the final image.
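
The FastAPI image above runs uvicorn main:app, and the health checks configured earlier probe /health. A minimal sketch of the main.py it expects (the route bodies are illustrative):

# main.py
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    # Keep this cheap: the healthcheck polls it every 30 seconds
    return {"status": "ok"}

@app.get("/")
def index():
    return {"message": "hello from AZIN"}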

#Multi-cloud deployment

The same Dockerfile deploys to any supported cloud. Change the cloud and region fields in azin.yaml:

# Deploy to GCP (available today)
services:
  web:
    build:
      type: dockerfile
    cloud: gcp
    region: us-central1

# Same app, different GCP region
services:
  web:
    build:
      type: dockerfile
    cloud: gcp
    region: europe-west4

# Same app, deploy to lttle.cloud (scale-to-zero, early access)
services:
  web:
    build:
      type: dockerfile
    cloud: lttle
    scaling:
      min: 0
      max: 10

On GCP, AZIN provisions pods on GKE Autopilot, Google's fully managed Kubernetes mode: the first cluster is free and you're billed only for running pods. On lttle.cloud (in early access), it provisions microVMs with sub-10ms cold starts that scale to zero when idle. AWS and Azure support is on our roadmap. Your Dockerfile stays the same across all of them.

#Connected services

Docker containers rarely run alone. Add managed databases, caches, and storage alongside your container in the same azin.yaml:

  • PostgreSQL — managed Cloud SQL on GCP, or lttle.cloud Postgres (in early access)
  • Redis — managed Memorystore on GCP
  • Object storage — GCS-compatible buckets in your cloud account

AZIN provisions these in your cloud account and injects connection strings as environment variables. No manual Cloud SQL console setup. No VPC peering configuration.
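
Object storage follows the same pattern. A sketch using the google-cloud-storage client; the STORAGE_BUCKET variable name is hypothetical here, so check your service's injected environment for the exact key:

import os

from google.cloud import storage  # assumed client; any GCS-compatible SDK works

# Hypothetical variable name: AZIN injects the provisioned bucket's settings
# into the environment, and the exact key may differ.
bucket_name = os.environ["STORAGE_BUCKET"]

client = storage.Client()
bucket = client.bucket(bucket_name)
bucket.blob("uploads/report.csv").upload_from_filename("report.csv")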

#Scaling

Docker containers on AZIN scale based on CPU utilization by default. Set min and max replicas, and AZIN handles the rest:

scaling:
  min: 2
  max: 20
  target_cpu: 70

On lttle.cloud (in early access), set min: 0 for true scale-to-zero. Your container shuts down when idle and starts in under 10ms when traffic arrives. Development and staging environments cost nothing when unused.

#Why AZIN for Docker

Your containers, your cloud. Docker containers deployed through AZIN run in your own GCP account today — AWS and Azure are on our roadmap. You own the infrastructure. No vendor lock-in — your Dockerfiles are standard, portable anywhere.

One config, any cloud. The same azin.yaml deploys to any supported cloud. Switch regions or move between supported clouds by changing one line. No rewriting deployment scripts or learning a new cloud's container service.

No cluster overhead. Unlike platforms that deploy Docker containers onto managed Kubernetes (where the underlying cloud cluster can cost ~$225/mo based on typical AWS EKS configurations, as of February 2026, before a single container runs), AZIN deploys to GKE Autopilot — first cluster free, pay only for running pods.

Scale-to-zero on lttle.cloud. Dev and staging containers scale to zero when idle. Sub-10ms cold starts via stateful microVM snapshots. Pay nothing when nothing runs. lttle.cloud is currently in early access.

#Docker on other platforms

For context, here's how Docker deployment works elsewhere as of February 2026:

Railway auto-detects Dockerfiles and builds them. Deploy experience is excellent. But containers run on Railway's infrastructure only — no BYOC below Enterprise tier. Railway now offers app sleeping (scale-to-zero) for idle services.

Render builds from Dockerfiles with zero-downtime deploys. Docker images must be under 10 GB compressed. All containers run on Render infrastructure — no BYOC at any tier.

Fly.io deploys anything in a Docker container to Firecracker microVMs across 18 regions. Strong for global distribution. Requires more operational knowledge — you manage fly.toml, Machines API, and volume configuration directly.

AZIN combines auto-detection and zero-config Docker builds with the ability to choose where those containers actually run.
