# Deploy Streamlit with AZIN
Streamlit turns Python scripts into interactive web apps. Deploying them to production requires a container, the right port configuration, and a platform that supports long-running WebSocket connections.
## Prerequisites
Before deploying, you need:
- Python 3.9+ installed locally (Streamlit 1.54 requires Python 3.9 or higher, as of March 2026)
- A Streamlit app — even a single `app.py` file works
- A `requirements.txt` listing your dependencies
- A GitHub repository connected to AZIN (for automated deploys)
Minimal requirements.txt for a Streamlit app:
```text
streamlit>=1.54
pandas>=2.2
```
Add your data science dependencies as needed — NumPy, Plotly, scikit-learn.
## Project structure
```text
my-streamlit-app/
├── app.py               # Main Streamlit application
├── requirements.txt     # Python dependencies
├── .streamlit/
│   └── config.toml      # Streamlit server configuration
├── .python-version      # Pin Python version (e.g., 3.13)
├── pages/               # Multi-page app (optional)
│   ├── 1_dashboard.py
│   └── 2_analysis.py
└── data/                # Static data files (optional)
    └── sample.csv
```
Streamlit auto-discovers files in the `pages/` directory and creates sidebar navigation. Filenames determine page order and display names: the numeric prefix sets the order, and the rest of the name becomes the label.
## Configure for production
Streamlit reads configuration from `.streamlit/config.toml` in your project root. The production-critical settings:
```toml
# .streamlit/config.toml

[server]
headless = true
address = "0.0.0.0"
port = 8501
enableCORS = true
enableXsrfProtection = true
maxUploadSize = 50

[browser]
gatherUsageStats = false

[theme]
primaryColor = "#7A9AF0"
```

Key settings explained:
- `headless = true` — prevents Streamlit from trying to open a browser window. Required in any container or cloud environment.
- `address = "0.0.0.0"` — binds to all network interfaces. Without this, Streamlit defaults to `localhost` and is unreachable outside the container.
- `port = 8501` — Streamlit's default port. Match this with your `EXPOSE` directive and platform configuration.
- `gatherUsageStats = false` — disables telemetry in production.
You can also set these via environment variables: every config option maps to an uppercase `STREAMLIT_<SECTION>_<OPTION>` variable, such as `STREAMLIT_SERVER_HEADLESS=true` or `STREAMLIT_SERVER_PORT=8501`.
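As a sketch, the `[server]` and `[browser]` settings above could equivalently be supplied at container runtime (variable names follow Streamlit's `STREAMLIT_<SECTION>_<OPTION>` convention):

```shell
# Equivalent to the config.toml settings above
export STREAMLIT_SERVER_HEADLESS=true
export STREAMLIT_SERVER_ADDRESS=0.0.0.0
export STREAMLIT_SERVER_PORT=8501
export STREAMLIT_BROWSER_GATHER_USAGE_STATS=false
```

Environment variables take precedence over `config.toml`, which is convenient when the same image runs in several environments.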
## Write a production-ready app
A minimal Streamlit app with proper resource handling for production:
```python
# app.py
import streamlit as st
import pandas as pd

st.set_page_config(
    page_title="My Dashboard",
    page_icon="📊",
    layout="wide",
    initial_sidebar_state="expanded",
)

st.title("Production Dashboard")

@st.cache_data(ttl=3600)
def load_data():
    """Load and cache data. TTL prevents stale data in production."""
    return pd.read_csv("data/sample.csv")

data = load_data()
st.dataframe(data, use_container_width=True)
```

The caching decorators to know:
- `@st.cache_data` — caches the return value of data-loading functions. Set a `ttl` (time to live in seconds) to refresh periodically. Without caching, every user session re-runs your data pipeline.
- `@st.cache_resource` — caches heavyweight objects like ML models or database connections that should be shared across all sessions. Use this for anything expensive to initialize.
```python
@st.cache_resource
def load_model():
    """Load ML model once, share across all sessions."""
    import joblib
    return joblib.load("model.pkl")
```

## Dockerfile
Multi-stage build for a lean production image:
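The Dockerfile itself isn't reproduced here, so below is a minimal sketch consistent with the description: a builder stage with `build-essential`, a slim runtime stage, a `urllib`-based `HEALTHCHECK`, and an `ENTRYPOINT` that carries all the server flags. File names (`app.py`, `requirements.txt`) follow the project structure above.

```dockerfile
# --- Build stage: compile wheels while build tools are available ---
FROM python:3.13-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# --- Runtime stage: no compilers, just the installed packages ---
FROM python:3.13-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY . .

EXPOSE 8501

# python:3.13-slim ships without curl, so probe the health endpoint with urllib
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8501/_stcore/health')"

# All server configuration lives here; no runtime flags or env vars needed
ENTRYPOINT ["streamlit", "run", "app.py", \
  "--server.port=8501", "--server.address=0.0.0.0", "--server.headless=true"]
```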
Build tools like build-essential are discarded after the first stage. A typical Streamlit app image comes in under 500MB with this approach, compared to 1GB+ without multi-stage builds.
> **Info:** `.dockerignore` excludes `.streamlit/secrets.toml` intentionally. Streamlit secrets should come from environment variables in production, not from files baked into the container image.

The `HEALTHCHECK` uses Streamlit's built-in `/_stcore/health` endpoint (returns "ok" when ready). We use Python's `urllib` instead of `curl` because `python:3.13-slim` does not ship with `curl` — no extra packages needed.
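A `.dockerignore` along these lines (a sketch — adjust the entries to your repository) keeps secrets and local clutter out of the build context:

```text
.git
__pycache__/
*.pyc
.venv/
.streamlit/secrets.toml
```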
## Deploy with AZIN
AZIN uses Railpack to auto-detect your project. When it finds requirements.txt with streamlit as a dependency, it identifies a Python application and builds accordingly.
Connect your GitHub repository to AZIN and configure the deployment:
```yaml
name: my-streamlit-app
cloud: gcp
region: us-central1
services:
  app:
    build:
      type: railpack
    env:
      STREAMLIT_SERVER_HEADLESS: "true"
      STREAMLIT_SERVER_PORT: "8501"
      STREAMLIT_SERVER_ADDRESS: "0.0.0.0"
      STREAMLIT_BROWSER_GATHER_USAGE_STATS: "false"
    scaling:
      min: 1
      max: 5
      target_cpu: 70
```

Railpack resolves your Python version from `.python-version` first, then `runtime.txt`, and defaults to Python 3.13 if neither is present. Dependencies are installed automatically from your `requirements.txt`, `pyproject.toml`, or `Pipfile`.
You can override the start command if your entry point isn't app.py:
```yaml
services:
  app:
    build:
      type: railpack
      start: streamlit run dashboard/main.py --server.port=8501 --server.address=0.0.0.0 --server.headless=true
```

If you prefer the Dockerfile approach, AZIN detects and builds from your Dockerfile automatically — no additional configuration needed. Place the Dockerfile at your project root and AZIN uses it instead of Railpack.
AZIN deploys to your own GCP account via GKE Autopilot. The first GKE cluster is free — you pay only for the pods running your application. Your Streamlit app runs in your cloud, not on shared infrastructure. AWS support is on our roadmap.
### WebSocket support
Streamlit relies on WebSocket connections for real-time UI updates. Every widget interaction — slider moves, button clicks, file uploads — travels over a persistent WebSocket between the browser and server. AZIN's load balancer supports WebSocket connections natively. No additional proxy configuration required.
This is a critical distinction from serverless platforms. Serverless functions terminate after a response is sent, which kills the WebSocket connection and breaks Streamlit's interactive model entirely.
## Deploy with Docker (any platform)
If you're deploying to any platform that supports Docker containers:
```shell
# Build the image
docker build -t my-streamlit-app .

# Run locally to verify
docker run -p 8501:8501 my-streamlit-app

# Tag and push to your container registry
docker tag my-streamlit-app gcr.io/my-project/streamlit-app:latest
docker push gcr.io/my-project/streamlit-app:latest
```

The container exposes port 8501 and handles all server configuration via the `ENTRYPOINT`. No environment variables or runtime flags needed — everything is set in the Dockerfile.
## Production considerations
### Memory management
Streamlit holds a Python thread and UI state objects per active user session. RAM usage scales linearly with concurrent users. A Streamlit app with moderate data processing can consume 200-500MB per session.
Strategies to control memory:
- Use `@st.cache_data` with `ttl` on all data-loading functions. Cached data is shared across sessions, so 10 users viewing the same dataset don't load it 10 times.
- Use `@st.cache_resource` for ML models and database connections. These are initialized once and shared globally.
- Set `max_entries` on cache decorators to limit how many cached results are stored: `@st.cache_data(max_entries=100)`.
- Avoid storing large DataFrames in `st.session_state` — they persist for the lifetime of the session.
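Streamlit's decorators only run inside a live app, but the eviction behavior of `max_entries` is analogous to the standard library's `functools.lru_cache` — a stdlib sketch of the idea, not Streamlit's actual implementation:

```python
from functools import lru_cache

call_count = 0

# maxsize plays the role of max_entries: only the most recently
# used results stay cached; older entries are evicted.
@lru_cache(maxsize=2)
def load_dataset(name: str) -> str:
    global call_count
    call_count += 1
    return f"rows of {name}"

load_dataset("sales")   # miss: runs the function
load_dataset("sales")   # hit: served from cache
load_dataset("users")   # miss
load_dataset("events")  # miss: evicts "sales" (least recently used)
load_dataset("sales")   # miss again: was evicted

print(call_count)  # 4 — only cache misses ran the function
```

The same trade-off applies in Streamlit: a larger `max_entries` saves recomputation but holds more results in memory.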
For a data dashboard with moderate complexity, plan for 512MB-1GB per pod with 2-5 concurrent users per pod.
### Health checks
Streamlit exposes a built-in health endpoint at /_stcore/health. Configure your platform to poll this endpoint:
```yaml
services:
  app:
    health_check:
      path: /_stcore/health
      interval: 30
      timeout: 10
```

The endpoint returns a plain-text "ok" response with a 200 status code when the server is operational.
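If you want to probe the endpoint yourself — say, from a deploy script — a stdlib check might look like this (a sketch; the URL assumes Streamlit's default port):

```python
import urllib.request
import urllib.error

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(is_healthy("http://localhost:8501/_stcore/health"))
```

This is the same probe the Dockerfile's `HEALTHCHECK` performs, minus the exit-code handling Docker adds on top.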
### Session affinity
Because Streamlit maintains per-user session state via WebSockets, your load balancer must route the same user to the same pod. GKE Autopilot handles this automatically. If deploying behind a custom load balancer, enable sticky sessions (session affinity) based on client IP or cookies.
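On GKE with a custom Ingress, cookie-based affinity can be declared through a `BackendConfig` resource — a sketch with illustrative names (not needed when AZIN manages the load balancer for you):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: streamlit-affinity   # illustrative name
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"   # keep each user on the same pod
    affinityCookieTtlSec: 3600
```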
### HTTPS and reverse proxy
When running behind a reverse proxy (Nginx, Caddy, or a cloud load balancer), ensure WebSocket upgrade headers are forwarded. AZIN handles this automatically. For a custom Nginx setup:
```nginx
location / {
    proxy_pass http://streamlit:8501;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 86400;
}
```

The `proxy_read_timeout 86400` directive prevents Nginx from closing idle WebSocket connections. Without it, users see their app disconnect after 60 seconds of inactivity.
## Why AZIN for Streamlit hosting
Streamlit apps often process sensitive datasets — financial reports, patient data, internal metrics. AZIN deploys to your own GCP account, so the data never leaves your infrastructure. You own the billing relationship with Google and full access to the underlying resources.
- **WebSocket support out of the box.** AZIN's load balancer handles WebSocket connections natively — no proxy configuration, no timeouts to tune.
- **No cold starts.** GKE Autopilot keeps pods warm when `min: 1` is set. The first cluster is free; you pay only for pod resources.
- **Scale-to-zero staging** on lttle.cloud (in early access) — staging Streamlit apps that receive no traffic cost nothing.
## Related guides
- Deploy Python with AZIN — General Python deployment, all frameworks
- Deploy Docker containers with AZIN — For custom Dockerfile deployments
- Deploy FastAPI with AZIN — Build the API backend that feeds your Streamlit dashboard
- What is BYOC? — Why deploying to your own cloud matters