Deploy Python with AZIN
Python deployment covers everything from Django web apps to FastAPI microservices to Celery workers to standalone data pipelines. AZIN handles the full range. Railpack auto-detects your Python version, package manager, and framework from standard project files — then deploys to your own GCP account on every git push. No Dockerfile. No Procfile. No infrastructure configuration.
#How AZIN detects Python
AZIN uses Railpack — a zero-config builder that auto-detects your language and framework. When it finds any of these files in your repository, it identifies a Python project:
- requirements.txt
- pyproject.toml
- Pipfile
- setup.py
Python version resolution — Railpack checks these sources in order:
| Source | Example |
|---|---|
| .python-version | 3.13 |
| runtime.txt | python-3.13 |
| Default | Python 3.13 |
Python 3.13 shipped in October 2024 with an improved interactive interpreter, experimental free-threaded mode, and a JIT compiler. It is the default on AZIN as of February 2026. Railpack supports Python 3.8 through 3.13 — pin your version in .python-version for reproducible builds.
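For example, committing a .python-version file whose entire content is the version string pins the interpreter for every build:

```
3.13
```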
Framework detection — After resolving the Python version, Railpack scans your dependency files to determine which framework you're using and configures the correct production server:
| Framework detected | Production server | Start command |
|---|---|---|
| Django (manage.py + django in deps) | Gunicorn | gunicorn myproject.wsgi |
| Flask (flask in deps) | Gunicorn | gunicorn myapp:app |
| FastAPI (fastapi in deps) | Uvicorn | uvicorn main:app --host 0.0.0.0 |
| Generic Python | Your start script or Procfile | Custom |
No framework? Railpack falls back to your start script in pyproject.toml, a Procfile, or the RAILPACK_START_CMD environment variable. This covers data scripts, CLI tools, queue consumers, and any Python process that isn't a web framework.
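A minimal sketch of the Procfile option, assuming a hypothetical queue-consumer module:

```
worker: python -m myapp.consume_queue
```

Each line pairs a process name with the command that starts it; Railpack picks up the command and runs it as the service's start command.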
#Framework support
AZIN supports every major Python web framework out of the box. Each gets framework-specific build optimizations. Dedicated deploy guides cover each in detail:
- Django — Railpack detects manage.py alongside Django in your dependencies. It configures Gunicorn, runs collectstatic to gather static assets, and executes database migrations on each deploy. Full admin panel, ORM, and middleware support with zero configuration.
- Flask — Railpack detects Flask in your dependency files and configures Gunicorn as the WSGI server. Use the application factory pattern (create_app()) and Flask-Migrate for schema migrations. Ideal for APIs and smaller web apps.
- FastAPI — Railpack detects FastAPI and configures Uvicorn as the ASGI server. Native async support, automatic OpenAPI documentation, and Pydantic validation all work out of the box. The right choice for high-concurrency APIs and WebSocket endpoints; see the sketch after this list.
- Generic Python — Any Python app that doesn't match the above frameworks still deploys. Provide a start command via Procfile, pyproject.toml scripts, or the RAILPACK_START_CMD environment variable. This covers Celery workers, data pipelines, scheduled scripts, and custom HTTP servers like Starlette or Tornado.
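To make the FastAPI row concrete, here is a minimal app whose file and variable names (main.py, app) line up with the default uvicorn main:app start command from the detection table; treat the names and the endpoint as illustrative:

```python
# main.py
from fastapi import FastAPI

app = FastAPI()


@app.get("/healthz")
async def healthz() -> dict:
    # Simple liveness endpoint for load balancers and uptime checks.
    return {"status": "ok"}
```

No server code or Dockerfile is needed in the repository; Railpack supplies Uvicorn and the start command.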
#Deployment config
Connect your GitHub repository from the AZIN Console. For a Python app with PostgreSQL and Redis:
```yaml
name: my-python-app
cloud: gcp
region: us-central1
services:
  api:
    build:
      type: railpack
    env:
      PYTHON_ENV: production
      PORT: "8000"
    scaling:
      min: 1
      max: 10
      target_cpu: 70
  db:
    type: postgres
    plan: production
  cache:
    type: redis
```

AZIN injects DATABASE_URL and REDIS_URL into your api service automatically. The db service provisions Cloud SQL in your GCP account. The cache service provisions Memorystore. Both live in your cloud, not on shared infrastructure.
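If you override the default start command, make sure the server binds to the injected PORT. A sketch for a Django project (the project name myproject is an assumption):

```
web: gunicorn myproject.wsgi --bind 0.0.0.0:$PORT --workers 2
```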
Worker service pattern
Python apps frequently need background processing — Celery, RQ, Dramatiq, or a custom queue consumer. Define workers as separate services that share the same codebase:
```yaml
name: my-python-app
cloud: gcp
region: us-central1
services:
  api:
    build:
      type: railpack
    env:
      PYTHON_ENV: production
  worker:
    build:
      type: railpack
    start: celery -A myapp worker --loglevel=info
    env:
      PYTHON_ENV: production
  scheduler:
    build:
      type: railpack
    start: celery -A myapp beat --loglevel=info
    env:
      PYTHON_ENV: production
  db:
    type: postgres
    plan: production
  cache:
    type: redis
```

All three services (api, worker, scheduler) receive the same DATABASE_URL and REDIS_URL. Workers scale independently with their own autoscaling configuration. This pattern works with any task queue — Celery with Redis or RabbitMQ, RQ, Dramatiq, or Huey.
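The worker and scheduler commands assume a Celery application importable as myapp. A minimal sketch that points the broker at the injected REDIS_URL (package name and task are illustrative):

```python
# myapp/__init__.py
import os

from celery import Celery

# Fall back to a local Redis for development; AZIN injects REDIS_URL in production.
redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379/0")

app = Celery("myapp", broker=redis_url, backend=redis_url)


@app.task
def send_welcome_email(user_id: int) -> None:
    # Placeholder task body; replace with real work.
    print(f"sending welcome email to user {user_id}")
```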
Data pipeline / CLI tool
Not every Python app is a web server. For scripts that run on a schedule or as one-shot processes:
```yaml
name: data-pipeline
cloud: gcp
region: us-central1
services:
  etl:
    build:
      type: railpack
    start: python -m pipeline.run
    env:
      PYTHON_ENV: production
  db:
    type: postgres
    plan: production
```

No framework detection needed. Railpack installs your dependencies and runs whatever start command you specify.
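The start command expects a package with a runnable module. A sketch of what pipeline/run.py could look like, assuming psycopg is in your dependencies and the events table is illustrative:

```python
# pipeline/run.py
import os

import psycopg


def main() -> None:
    # Connect with the DATABASE_URL that AZIN injects at runtime.
    with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM events")
            print("rows in events:", cur.fetchone()[0])


if __name__ == "__main__":
    main()
```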
#Package manager detection
Railpack detects your package manager from the files in your repository. No configuration needed.
| File present | Package manager | Install command |
|---|---|---|
| requirements.txt | pip | pip install -r requirements.txt |
| poetry.lock + pyproject.toml | Poetry | poetry install --no-dev |
| Pipfile.lock | Pipenv | pipenv install --deploy |
| pyproject.toml (with [build-system]) | pip | pip install . |
pip remains the default and most widely supported option. It ships with Python and works with requirements.txt. For most projects, pip freeze > requirements.txt is all you need.
Poetry manages dependencies, virtual environments, and packaging in one tool. If Railpack finds poetry.lock, it uses Poetry automatically. Pin exact versions with poetry lock before deploying.
Pipenv combines pip and virtualenv with a Pipfile lockfile. Railpack detects Pipfile.lock and uses Pipenv for installation. Functional but less actively maintained — Poetry or pip are more widely adopted for new projects.
uv — Astral's Rust-based package manager (from the Ruff team) is gaining rapid adoption with speeds 10-100x faster than pip. As of February 2026, uv generates standard requirements.txt lockfiles via uv pip compile, which Railpack handles natively. Use uv locally for development speed and let Railpack install from the lockfile it generates.
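A typical local workflow, assuming your direct dependencies are declared in pyproject.toml:

```bash
# Resolve and pin every transitive dependency into a requirements.txt
# that Railpack installs with plain pip.
uv pip compile pyproject.toml -o requirements.txt
```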
Info
Pin your Python version in .python-version and lock your dependencies before deploying. Railpack builds are reproducible only when versions are pinned — floating version specifiers can produce different builds from the same source code.
#Environment variables and secrets
Python apps read configuration from the environment. AZIN injects infrastructure connection strings automatically and lets you set custom variables through the Console.
Automatically injected variables:
| Variable | Source | When |
|---|---|---|
| DATABASE_URL | Cloud SQL | When a postgres service is added |
| REDIS_URL | Memorystore | When a redis service is added |
| PORT | AZIN | Always |
Custom variables — set through the AZIN Console's environment variables panel:
```
SECRET_KEY=your-production-secret
PYTHON_ENV=production
ALLOWED_HOSTS=.yourdomain.com
SENTRY_DSN=https://key@sentry.io/project
```

All environment variables are encrypted at rest and injected at runtime. They never appear in build logs or container images.
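On the application side these are ordinary environment variables. A sketch of reading them in a Django settings module (the variable names match the example above; the defaults are assumptions):

```python
# settings.py (excerpt)
import os

SECRET_KEY = os.environ["SECRET_KEY"]  # fail fast if the secret is missing
DEBUG = os.environ.get("PYTHON_ENV") != "production"
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "").split(",")
```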
Info
Reading DATABASE_URL in your application:
```python
import os

DATABASE_URL = os.environ["DATABASE_URL"]

# With SQLAlchemy
from sqlalchemy import create_engine

engine = create_engine(DATABASE_URL, pool_pre_ping=True, pool_recycle=300)

# With psycopg directly
import psycopg

conn = psycopg.connect(DATABASE_URL)
```

pool_pre_ping=True prevents stale connection errors after Cloud SQL maintenance windows. pool_recycle=300 closes connections older than 5 minutes, matching Cloud SQL's idle timeout defaults.
#Why AZIN for Python hosting
Your cloud, your data. Python apps and PostgreSQL databases run in your own GCP account. You own the infrastructure, the billing relationship with Google, and the data. AWS and Azure are on our roadmap.
Every framework, detected automatically. Django, Flask, FastAPI, and generic Python — all detected from dependency files. Each gets the correct production server. Gunicorn for WSGI apps, Uvicorn for ASGI, or your custom start command for everything else.
Managed PostgreSQL and Redis. Cloud SQL and Memorystore are provisioned in your GCP account. Automated backups, encryption at rest, and connection strings injected as DATABASE_URL and REDIS_URL. No connection string copy-paste, no separate database provisioning.
No cold starts for production traffic. GKE Autopilot keeps your pods warm and scales horizontally based on CPU load. The first GKE cluster is free — you pay only for pod resources, not cluster overhead. This differs from platforms where a managed Kubernetes cluster can cost ~$225/month in underlying cloud fees before any workloads run (based on typical AWS EKS configurations, as of February 2026).
Scale-to-zero staging. Deploy staging environments on lttle.cloud (in early access). When your staging Python app receives no traffic, it scales to zero and costs nothing. Production stays warm on GKE Autopilot; staging idles without burning compute.
#Related guides
- Deploy Django with AZIN — Full Django detection with Gunicorn, collectstatic, and migrations
- Deploy Flask with AZIN — Flask apps with Gunicorn and PostgreSQL
- Deploy FastAPI with AZIN — Async APIs with Uvicorn
- Deploy Docker containers with AZIN — For custom Dockerfile deployments
- Host PostgreSQL on AZIN — Managed Cloud SQL in your own GCP account