Deploy Node.js with AZIN
Production Node.js hosting requires a persistent process manager, correct PORT binding, and a clean build pipeline for compiled assets or TypeScript. AZIN handles all of it. Connect your GitHub repository, and Railpack auto-detects your Node.js version, package manager, and framework — then deploys to your own GCP account on every push.
# How AZIN detects Node.js
AZIN uses Railpack — a zero-config builder that auto-detects your language and framework. When it finds a package.json, it identifies a Node.js project and configures the build without any additional setup.
Node.js version resolution — Railpack checks these sources in order:
- engines.node field in package.json — "engines": { "node": "22.x" }
- .nvmrc — a file containing 22 or lts/jod
- .node-version — same format as .nvmrc
- Default: Node.js 22 LTS
Package manager detection — Railpack reads the lockfile present in your repo:
| Lockfile | Manager |
|---|---|
| package-lock.json | npm |
| yarn.lock | yarn |
| pnpm-lock.yaml | pnpm |
| bun.lockb | bun |
No configuration needed. The correct package manager is used automatically.
Framework detection — Railpack recognizes Next.js, Remix, Nuxt, Vite, and Astro from package.json dependencies and applies framework-specific optimizations. For Express.js, Fastify, Koa, NestJS, or any other Node.js HTTP server, Railpack falls back to your start script in package.json.
Start command resolution:
```json
{
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```

Railpack runs npm run build (if present), then npm run start. No Procfile needed.
# Deployment config
Connect your GitHub repo from the AZIN Console. For most Node.js apps, zero config is needed. Add an azin.yaml when you need explicit control over scaling, environment variables, or multiple services.
Express.js / generic Node.js API
```yaml
name: my-node-api
cloud: gcp
region: us-central1
services:
  api:
    build:
      type: railpack
    env:
      NODE_ENV: production
      PORT: "3000"
    scaling:
      min: 1
      max: 10
      target_cpu: 70
  db:
    type: postgres
    plan: production
  cache:
    type: redis
```

Worker queue pattern — API with background worker
For apps that process jobs from a queue, define separate services for the API and the worker. Both share the same codebase and the same DATABASE_URL / REDIS_URL:
```yaml
name: my-node-app
cloud: gcp
region: us-central1
services:
  api:
    build:
      type: railpack
    env:
      NODE_ENV: production
  worker:
    build:
      type: railpack
    start: node workers/queue-consumer.js
    env:
      NODE_ENV: production
  db:
    type: postgres
    plan: production
  cache:
    type: redis
```

The start override in the worker service tells Railpack to use a custom entry point instead of the default npm run start. AZIN provisions Redis alongside PostgreSQL and injects both connection strings automatically.
# PORT and environment variables
Node.js apps must listen on the port provided by the runtime environment. AZIN injects PORT as an environment variable. Your app must bind to it:
```javascript
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
```

Hard-coding a port (e.g., app.listen(3000)) will cause health checks to fail. Always use process.env.PORT.
Standard environment variables AZIN injects:
| Variable | Source | Value |
|---|---|---|
| PORT | AZIN | Assigned port for the service |
| NODE_ENV | Your azin.yaml | Set to production |
| DATABASE_URL | Cloud SQL | Full Postgres connection string |
| REDIS_URL | Memorystore | Redis connection string |
DATABASE_URL and REDIS_URL are injected automatically when you add db and cache service blocks. Secrets and custom variables are set through the AZIN Console's environment variables panel — values are encrypted at rest and never appear in build logs or container images.
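Your code reads these like any other environment variables. Here is a minimal sketch of a shared connection module, assuming the pg and ioredis clients; any Postgres or Redis client that accepts a connection URL works the same way:

```typescript
// db.ts - connect using the connection strings AZIN injects.
// The pg and ioredis packages are illustrative choices, not requirements.
import { Pool } from 'pg';
import Redis from 'ioredis';

export const db = new Pool({ connectionString: process.env.DATABASE_URL });
export const redis = new Redis(process.env.REDIS_URL!);

// Example query: const { rows } = await db.query('SELECT now()');
```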
# TypeScript support
Railpack handles TypeScript compilation as part of the build step. If your package.json has a build script that runs tsc, tsx, or another compiler, Railpack runs it before starting the app.
A typical TypeScript project structure:
```json
{
  "scripts": {
    "build": "tsc --project tsconfig.json",
    "start": "node dist/index.js"
  },
  "devDependencies": {
    "typescript": "^5.0.0",
    "@types/node": "^22.0.0"
  }
}
```

Railpack produces a compiled image from your dist/ output. The TypeScript compiler and dev dependencies are not included in the final container image — only the compiled JavaScript and production dependencies.
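For reference, a minimal src/index.ts that fits the layout above could look like this; Express is an assumption, and any HTTP framework that binds to process.env.PORT behaves the same:

```typescript
// src/index.ts - compiled by "npm run build" into dist/index.js, which "npm run start" runs
import express from 'express';

const app = express();

// Example route (the path is arbitrary here)
app.get('/healthz', (_req, res) => {
  res.json({ status: 'ok' });
});

// Bind to the PORT AZIN injects; fall back to 3000 for local development
const port = Number(process.env.PORT) || 3000;
app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
```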
For projects using tsx for direct TypeScript execution:
```json
{
  "scripts": {
    "start": "tsx src/index.ts"
  },
  "dependencies": {
    "tsx": "^4.0.0"
  }
}
```

NestJS uses its own build pipeline (@nestjs/cli) that compiles to dist/main.js. Railpack detects NestJS and uses nest build followed by node dist/main.js.
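If you are on NestJS, make sure the bootstrap file binds to the injected PORT rather than the hard-coded 3000 from the default scaffold. A minimal sketch, assuming the standard AppModule:

```typescript
// src/main.ts - standard NestJS bootstrap, reading the PORT AZIN injects
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Fall back to 3000 only for local development
  await app.listen(process.env.PORT ?? 3000);
}
bootstrap();
```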
# Worker queues and background jobs
Node.js background workers are a first-class service type in AZIN. Define them as separate worker services in azin.yaml alongside your API service.
BullMQ pattern (Redis-backed queue):
```javascript
// workers/queue-consumer.js
import { Worker } from 'bullmq';
import Redis from 'ioredis';

// BullMQ workers hold a blocking Redis connection, which requires this option
const connection = new Redis(process.env.REDIS_URL, {
  maxRetriesPerRequest: null,
});

const worker = new Worker(
  'email-queue',
  async (job) => {
    await sendEmail(job.data);
  },
  { connection }
);
```

AZIN provisions Memorystore (Redis) in your GCP account when you add type: redis to your services. The REDIS_URL is injected into all services in the project — both your API and your worker connect to the same Redis instance.
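On the API side, a minimal producer sketch pushes jobs onto the same queue; the queue name and payload shape here simply mirror the worker example above:

```typescript
// api/email-queue.ts - enqueue jobs that the worker service consumes
import { Queue } from 'bullmq';
import Redis from 'ioredis';

const connection = new Redis(process.env.REDIS_URL!);
const emailQueue = new Queue('email-queue', { connection });

export async function queueEmail(to: string, subject: string) {
  // The job name is arbitrary; the worker processes everything on 'email-queue'
  await emailQueue.add('send-email', { to, subject });
}
```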
pg-boss pattern (PostgreSQL-backed queue):
If you prefer to skip Redis and use PostgreSQL as your queue backend, pg-boss works with the DATABASE_URL AZIN injects:
```javascript
import PgBoss from 'pg-boss';

const boss = new PgBoss(process.env.DATABASE_URL);
await boss.start();

boss.work('send-email', async (job) => {
  await sendEmail(job.data);
});
```

Workers scale independently of the API. Set different scaling configs per service if your worker needs more CPU or memory than the API.
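The producing side is symmetric. A minimal sketch, assuming pg-boss v8 or later, where send() is the enqueue call:

```typescript
// api/email-jobs.ts - enqueue a pg-boss job from the API service (pg-boss v8+ assumed)
import PgBoss from 'pg-boss';

const boss = new PgBoss(process.env.DATABASE_URL!);
await boss.start();

export async function queueEmail(to: string, subject: string) {
  // Picked up by the worker's boss.work('send-email', ...) handler
  await boss.send('send-email', { to, subject });
}
```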
# Why AZIN for Node.js hosting
Your cloud, your data. Node.js hosting on AZIN runs inside your own GCP account. Your application data, database, and logs stay within your VPC. You own the billing relationship with Google directly. AWS and Azure are on our roadmap.
Every package manager, detected automatically. npm, yarn, pnpm, and bun — all detected from lockfiles, no configuration required. This covers the full range of modern Node.js projects without any azin.yaml changes.
Managed PostgreSQL and Redis provisioned alongside your app. Cloud SQL and Memorystore live in your GCP account, not on shared infrastructure. Connection strings are injected automatically. No separate database provisioning, no connection string copy-paste.
No cold starts for production traffic. GKE Autopilot keeps your pods warm and scales horizontally based on CPU load. The first GKE cluster is free — you pay only for pod resources, not cluster overhead. This differs from platforms where a managed Kubernetes cluster can cost ~$225/month in underlying cloud fees before any workloads run (based on typical AWS EKS configurations, as of February 2026).
Scale-to-zero staging. Deploy staging environments on lttle.cloud (in early access). When your staging Node.js app receives no traffic, it scales to zero and costs nothing. Production stays warm on GKE Autopilot; staging idles without burning compute.
# Related guides
- Deploy Next.js with AZIN — Next.js-specific optimizations, App Router, output tracing
- Deploy Go with AZIN — Static binary deployment for Go APIs
- Deploy Docker containers with AZIN — Custom Dockerfile when you need full control over the image
- Host PostgreSQL on AZIN — Managed Cloud SQL in your own GCP account
- AZIN vs Railway — Node.js hosting: shared infrastructure vs your own cloud
- AZIN vs Render — Pricing, regions, and BYOC compared