Deploy Flask with AZIN

Deploy Guides · 1 min read

Tags: deploy · flask · python · postgresql

Flask apps need a production WSGI server, environment variables, a database connection, and a process to handle migrations before they're ready to serve traffic. AZIN handles all of it: Railpack auto-detects Flask, configures Gunicorn, provisions PostgreSQL in your GCP account, and injects DATABASE_URL automatically. Push to GitHub and your Flask app deploys.

#How AZIN detects Flask

AZIN uses Railpack to auto-detect your project type. When it finds a Python dependency file with Flask listed among your dependencies, it identifies the project as a Flask app and configures the build accordingly.

Detection files Railpack looks for:

  • requirements.txt
  • pyproject.toml
  • Pipfile
  • setup.py

Once detected, Railpack resolves your Python version from .python-version first, then runtime.txt, and defaults to Python 3.13 if neither is present. Package manager detection is automatic — pip, Poetry, and Pipenv are all supported.
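For example, pinning the interpreter is a single committed file — a .python-version containing just the version string (3.12 here is illustrative):

```
3.12
```

Without this file (or a runtime.txt), Railpack falls back to Python 3.13.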

What Railpack configures automatically for Flask:

  • Python version from .python-version or runtime.txt (default: 3.13)
  • Dependencies installed via pip, Poetry, or Pipenv based on your lockfile
  • Gunicorn as the production WSGI server — no Nginx, no reverse proxy configuration
  • Start command set to gunicorn with your app module

No Procfile, no Dockerfile, no manual server configuration. If you already have a Dockerfile, AZIN supports that too — but for most Flask apps, Railpack handles everything.

#Deployment config

Connect your GitHub repository to AZIN and it deploys on every push. For a Flask app with PostgreSQL and Redis, the azin.yaml looks like this:

name: my-flask-app
cloud: gcp
region: us-central1
services:
  web:
    build:
      type: railpack
    env:
      FLASK_ENV: production
      FLASK_APP: "myapp:create_app()"
    scaling:
      min: 1
      max: 8
      target_cpu: 70
  db:
    type: postgres
    plan: production
  cache:
    type: redis

AZIN injects DATABASE_URL and REDIS_URL into your web service automatically. The db service provisions Cloud SQL in your GCP account. The cache service provisions Memorystore. Both live in your cloud, not on shared infrastructure.
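Since these URLs arrive as environment variables, a small startup helper that fails fast when one is missing makes misconfiguration obvious immediately rather than at first request. A sketch (the helper name is our own, not part of AZIN):

```python
import os

def require_env(name: str) -> str:
    """Return a required environment variable, failing fast if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# Usage at startup — read the URLs AZIN injects at runtime:
# database_url = require_env("DATABASE_URL")
# redis_url = require_env("REDIS_URL")
```

A missing variable then produces one clear error in the deploy logs instead of a vague connection failure later.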

#Flask app configuration for production

Flask 3.x ships with sensible defaults but requires explicit configuration for production. The recommended pattern is the application factory — a create_app() function that configures the app and returns it:

# myapp/__init__.py
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
 
db = SQLAlchemy()
migrate = Migrate()
 
def create_app():
    app = Flask(__name__)
 
    app.config["SECRET_KEY"] = os.environ["SECRET_KEY"]
    app.config["SQLALCHEMY_DATABASE_URI"] = os.environ["DATABASE_URL"]
    app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
        "pool_pre_ping": True,
        "pool_recycle": 300,
    }
    app.config["DEBUG"] = False
 
    db.init_app(app)
    migrate.init_app(app, db)
 
    from myapp.routes import main
    app.register_blueprint(main)
 
    return app

Key points:

  • SECRET_KEY must come from an environment variable — never hardcode it
  • DATABASE_URL is injected by AZIN automatically when you add a db service
  • pool_pre_ping prevents stale connection errors after database restarts
  • DEBUG must be False in production — Flask's debug mode exposes an interactive debugger that can execute arbitrary code

Your requirements.txt for a Flask app with PostgreSQL and Redis:

Flask>=3.0
gunicorn>=23.0
Flask-SQLAlchemy>=3.1
Flask-Migrate>=4.0
psycopg[binary]>=3.2
redis>=5.0

psycopg[binary] is the PostgreSQL adapter. Use the binary distribution in production — it bundles native libraries and avoids build-time compilation.
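One caveat worth knowing: with psycopg (version 3) installed, SQLAlchemy interprets a plain postgresql:// URL as a request for the psycopg2 driver. If the injected DATABASE_URL uses that scheme, rewriting it to the postgresql+psycopg dialect inside create_app avoids an import error. A sketch — whether AZIN's injected URL needs this depends on the scheme it uses:

```python
def normalize_db_url(url: str) -> str:
    """Point a plain postgresql:// URL at SQLAlchemy's psycopg 3 driver."""
    if url.startswith("postgresql://"):
        return "postgresql+psycopg://" + url[len("postgresql://"):]
    return url

# In create_app():
# app.config["SQLALCHEMY_DATABASE_URI"] = normalize_db_url(os.environ["DATABASE_URL"])
```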

Info

Never use flask run in production. It starts Flask's built-in development server, which is single-threaded and not designed for concurrent traffic. Railpack configures Gunicorn automatically — flask run is for local development only.

#Running database migrations

Flask-Migrate wraps Alembic and adds flask db commands for managing schema migrations:

# Generate a migration from model changes
flask db migrate -m "add users table"
 
# Apply migrations to the database
flask db upgrade

The migrations/ directory must be committed to your repository — it contains the Alembic version history. For automatic migrations on each deploy, add a release command to your azin.yaml:

services:
  web:
    build:
      type: railpack
    deploy:
      release_command: "flask db upgrade"
    env:
      FLASK_ENV: production
      FLASK_APP: "myapp:create_app()"

The release_command runs after the build but before the new version receives traffic. If the migration fails, the deploy is cancelled and the previous version continues serving requests. Your database is never left in a partially migrated state.

#Why AZIN for Flask hosting

Your cloud, your data. Flask apps and PostgreSQL databases run in your own GCP account. You own the infrastructure, the billing relationship with Google, and the data. AZIN is the control plane — not the cloud provider. AWS and Azure are on our roadmap.

No cold starts. Flask runs on GKE Autopilot, which keeps at least one pod warm at all times when min: 1 is set. Serverless Flask deployments on Cloud Run or Lambda suffer from cold starts on the first request. Kubernetes pods don't have this problem — traffic hits a running process, not a container initializing from scratch.

Managed PostgreSQL and Redis. Cloud SQL and Memorystore are provisioned in your GCP account. Automated backups, connection pooling, and encryption at rest included. DATABASE_URL and REDIS_URL are injected at runtime — your Flask app reads them from the environment with no manual setup.

Scale-to-zero staging. Deploy staging environments on lttle.cloud (in early access). When your staging Flask app receives no traffic, it scales to zero and costs nothing. Production stays warm; staging idles without burning compute.
