feat: Production-ready Docker infrastructure with Directus CMS

- Add separated Docker Compose architecture (astro/infrastructure/override)
- Implement Directus + PostgreSQL with pinned versions (10.12.0/15.5-alpine)
- Add comprehensive database safety protections and backup scripts
- Configure production-ready NGINX reverse proxy setup
- Add container names, labels, and enhanced healthchecks
- Remove fallback environment variables for explicit production config
- Include log rotation and monitoring improvements

Infrastructure deployment:
- npm run docker:infrastructure:up (one-time setup)
- npm run docker:astro:up (regular deployments)
- npm run db:backup/restore/status (database management)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Commit 6322126b29 (parent 2e575f894e), 2025-07-12 19:17:30 -06:00
11 changed files with 1009 additions and 8 deletions

.env.infrastructure (new file, 89 lines)

@@ -0,0 +1,89 @@
# Directus Infrastructure Environment Variables
# Copy this file to .env.infrastructure.local on your server and fill in the values
# =====================================
# REQUIRED: Security Keys & Database
# =====================================
# Generate these with: openssl rand -hex 32
DIRECTUS_KEY=your-directus-key-here-32-chars-minimum-abcdef1234567890
DIRECTUS_SECRET=your-directus-secret-here-32-chars-minimum-abcdef1234567890
# Strong database password
DIRECTUS_DB_PASSWORD=your-secure-database-password-here
# =====================================
# REQUIRED: Admin Account Setup
# =====================================
# Admin account created on first run only
DIRECTUS_ADMIN_EMAIL=admin@blackcanyontickets.com
DIRECTUS_ADMIN_PASSWORD=your-secure-admin-password-here
# =====================================
# REQUIRED: CORS Configuration
# =====================================
# Production domain(s) - REQUIRED, no fallbacks
DIRECTUS_CORS_ORIGIN=https://portal.blackcanyontickets.com
# =====================================
# REQUIRED: Email Configuration
# =====================================
# All email variables are REQUIRED - configure for production
DIRECTUS_EMAIL_FROM=cms@blackcanyontickets.com
DIRECTUS_EMAIL_TRANSPORT=smtp
DIRECTUS_SMTP_HOST=smtp.resend.com
DIRECTUS_SMTP_PORT=587
DIRECTUS_SMTP_USER=resend
DIRECTUS_SMTP_PASSWORD=your-resend-api-key-here
# =====================================
# SETUP INSTRUCTIONS
# =====================================
# 1. Copy this file: cp .env.infrastructure .env.infrastructure.local
# 2. Generate random keys: openssl rand -hex 32
# 3. Set strong passwords for database and admin
# 4. Update CORS origins to match your domain(s)
# 5. Configure email settings if needed
# 6. Load environment: export $(cat .env.infrastructure.local | xargs)
# 7. Start infrastructure: npm run docker:infrastructure:up
# =====================================
# DATABASE INITIALIZATION
# =====================================
# Directus will automatically:
# - Create database tables on first run
# - Set up admin user with DIRECTUS_ADMIN_EMAIL/PASSWORD
# - Initialize storage and extensions directories
# - Apply database migrations
# Check logs if initialization fails:
# docker logs bct-directus
# =====================================
# DATABASE SAFETY PROTECTIONS
# =====================================
# 🚨 IMPORTANT DATABASE SAFETY NOTES:
# 1. Named volumes prevent accidental data loss:
# - postgres_data: PostgreSQL database files
# - directus_uploads: User uploaded files
# - directus_extensions: Custom extensions
# 2. Admin user only created if no users exist
# - Safe to restart containers without overwriting users
# - Set DIRECTUS_ALLOW_ADMIN_CREATION=false after first setup
# 3. To completely reset database (⚠️ DATA LOSS):
# docker-compose -f docker-compose.infrastructure.yml down
# docker volume rm bct-whitelabel_postgres_data
# docker volume rm bct-whitelabel_directus_uploads
# docker volume rm bct-whitelabel_directus_extensions
# 4. To backup before major changes:
# docker exec bct-postgres pg_dump -U directus directus > backup.sql

DEPLOYMENT_GUIDE.md (new file, 321 lines)

@@ -0,0 +1,321 @@
# Docker Deployment Guide
This guide covers setting up Black Canyon Tickets with separated Docker Compose files for optimal deployment workflow.
## Overview
- **Astro App**: Rebuilt on each Git deployment
- **Directus + PostgreSQL**: Persistent infrastructure, deployed once
- **NGINX**: Reverse proxy to both services
- **Certbot**: SSL certificates (existing setup)
## Server Setup (One-Time)
### 1. Install Dependencies
```bash
# Update system
sudo apt update && sudo apt upgrade -y
# Install Docker & Docker Compose
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Log out and back in for Docker group to take effect
```
### 2. Clone Repository
```bash
cd /var/www
sudo git clone https://github.com/your-org/bct-whitelabel.git
sudo chown -R $USER:$USER bct-whitelabel
cd bct-whitelabel
```
### 3. Configure Environment
```bash
# Copy infrastructure environment template
cp .env.infrastructure .env.infrastructure.local
# Edit with your production values
nano .env.infrastructure.local
```
**Required values in `.env.infrastructure.local`:**
```bash
# Generate these with: openssl rand -hex 32
DIRECTUS_KEY=your-32-char-random-key-here
DIRECTUS_SECRET=your-32-char-random-secret-here
# Strong passwords
DIRECTUS_DB_PASSWORD=your-secure-db-password
DIRECTUS_ADMIN_PASSWORD=your-secure-admin-password
# Your domain
DIRECTUS_ADMIN_EMAIL=admin@blackcanyontickets.com
DIRECTUS_CORS_ORIGIN=https://portal.blackcanyontickets.com
# Email (optional)
DIRECTUS_SMTP_PASSWORD=your-resend-api-key
```
### 4. Create Docker Network
```bash
# Create shared network for services
docker network create bct-network
```
### 5. Deploy Infrastructure
```bash
# Load environment and start infrastructure
export $(cat .env.infrastructure.local | xargs)
npm run docker:infrastructure:up
# Verify services are running
docker ps
npm run docker:infrastructure:logs
```
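The `export $(cat … | xargs)` one-liner above splits on whitespace, so it silently breaks for any value that contains a space (a quoted email display name, for instance). A sketch of a sturdier loader using the shell's auto-export flag; the file and variable below are throwaway stand-ins, not part of this repo:

```shell
# Sketch: auto-export every variable assigned while sourcing an env file.
# Plain shell sourcing respects quoting, unlike the xargs approach.
ENV_FILE=$(mktemp)
printf '%s\n' 'DIRECTUS_EMAIL_FROM="CMS <cms@blackcanyontickets.com>"' > "$ENV_FILE"
set -a          # mark all subsequent assignments for export
. "$ENV_FILE"
set +a
rm -f "$ENV_FILE"
echo "$DIRECTUS_EMAIL_FROM"   # → CMS <cms@blackcanyontickets.com>
```

On the server, point it at `.env.infrastructure.local` in the project root instead of the throwaway file.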
### 6. Configure NGINX
```bash
# Copy simplified configuration
sudo cp nginx-example.conf /etc/nginx/sites-available/blackcanyontickets
# Enable site
sudo ln -s /etc/nginx/sites-available/blackcanyontickets /etc/nginx/sites-enabled/
# Test configuration
sudo nginx -t
```
### 7. Setup SSL with Certbot
```bash
# Get SSL certificate (Certbot handles NGINX config automatically)
sudo certbot --nginx -d portal.blackcanyontickets.com
# Reload NGINX with SSL
sudo systemctl reload nginx
```
## Git Deployment Script
Update your deployment script to only rebuild the Astro app:
### Simple Deploy Script
```bash
#!/bin/bash
set -e
echo "Deploying BCT Astro app..."
# Navigate to project directory
cd /var/www/bct-whitelabel
# Pull latest changes
git pull origin main
# Rebuild only Astro app (infrastructure stays running)
npm run docker:astro:up
echo "Deployment complete!"
```
**That's it!** Your infrastructure (Directus + PostgreSQL) keeps running.
## Daily Operations
### Check Service Status
```bash
# View all running containers
docker ps
# Check logs
npm run docker:astro:logs # Astro app logs
npm run docker:infrastructure:logs # Directus + PostgreSQL logs
# Health checks
curl http://localhost:3000/api/health # Astro health
curl http://localhost:8055/server/health # Directus health
```
### Restart Services
```bash
# Restart Astro app only
npm run docker:astro:down
npm run docker:astro:up
# Restart infrastructure (rare)
npm run docker:infrastructure:down
npm run docker:infrastructure:up
```
### View Service URLs
- **Main App**: https://portal.blackcanyontickets.com
- **Directus Admin**: https://portal.blackcanyontickets.com/admin
- **Directus API**: https://portal.blackcanyontickets.com/api/directus
## Backup Strategy
### Database Backup
```bash
# Create backup script
cat > backup-db.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/var/backups/bct"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR
# Backup PostgreSQL
docker exec bct-postgres pg_dump -U directus directus > $BACKUP_DIR/directus_$DATE.sql
# Keep only last 7 days
find $BACKUP_DIR -name "directus_*.sql" -mtime +7 -delete
echo "Backup completed: $BACKUP_DIR/directus_$DATE.sql"
EOF
chmod +x backup-db.sh
# Add daily backup to crontab without clobbering existing entries
(crontab -l 2>/dev/null; echo "0 2 * * * /var/www/bct-whitelabel/backup-db.sh") | crontab -
```
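The retention rule in the script above can be exercised in isolation before trusting it with real backups. A sketch using a scratch directory (GNU `touch -d` and `find` assumed):

```shell
# Demonstrate the 7-day retention rule from backup-db.sh against fake backups
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/directus_today.sql"                  # fresh backup: kept
touch -d '10 days ago' "$BACKUP_DIR/directus_old.sql"   # stale backup: deleted
find "$BACKUP_DIR" -name "directus_*.sql" -mtime +7 -delete
ls "$BACKUP_DIR"    # only directus_today.sql remains
```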
### Upload Backup
```bash
# Backup Directus uploads
tar -czf /var/backups/bct/directus_uploads_$(date +%Y%m%d).tar.gz \
-C /var/lib/docker/volumes/bct-whitelabel_directus_uploads/_data .
```
## Troubleshooting
### Common Issues
1. **Services won't start**
```bash
# Check logs
docker logs bct-directus
docker logs bct-postgres
# Check network
docker network ls | grep bct-network
```
2. **Database connection issues**
```bash
# Verify PostgreSQL is running
docker exec bct-postgres pg_isready -U directus
# Check environment variables
echo $DIRECTUS_DB_PASSWORD
```
3. **NGINX proxy errors**
```bash
# Test NGINX config
sudo nginx -t
# Check upstream connectivity
curl http://localhost:3000
curl http://localhost:8055
```
### Reset Infrastructure (if needed)
```bash
# WARNING: This will delete all Directus data
npm run docker:infrastructure:down
docker volume rm bct-whitelabel_postgres_data bct-whitelabel_directus_uploads bct-whitelabel_directus_extensions
npm run docker:infrastructure:up
```
## Monitoring
### Log Monitoring
```bash
# Real-time logs (the npm scripts already pass -f to docker-compose)
tail -f /var/log/nginx/access.log
npm run docker:astro:logs
npm run docker:infrastructure:logs
```

Log rotation (add to `/etc/logrotate.d/bct`):

```
/var/www/bct-whitelabel/logs/*.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    sharedscripts
}
```
### Resource Monitoring
```bash
# Container stats
docker stats
# Disk usage
docker system df
docker volume ls
```
## Auto-Start Services on Boot
### Configure Docker Services to Auto-Start
```bash
# Create systemd service for infrastructure
sudo tee /etc/systemd/system/bct-infrastructure.service > /dev/null << 'EOF'
[Unit]
Description=BCT Infrastructure (Directus + PostgreSQL)
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/var/www/bct-whitelabel
ExecStart=/usr/bin/docker-compose -f docker-compose.infrastructure.yml up -d
ExecStop=/usr/bin/docker-compose -f docker-compose.infrastructure.yml down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
EOF
# Enable and start the service
sudo systemctl enable bct-infrastructure.service
sudo systemctl start bct-infrastructure.service
```
### One-Command Astro Redeploy
Add this to your server for quick deployments:
```bash
# Create deployment alias
echo 'alias redeploy-bct="cd /var/www/bct-whitelabel && git pull && npm run docker:astro:up"' >> ~/.bashrc
source ~/.bashrc
# Now you can simply run:
redeploy-bct
```
This setup provides a robust, maintainable deployment pipeline where your Astro app can be updated frequently while keeping your CMS and database stable.


@@ -0,0 +1,30 @@
# Directus Extensions Directory
This directory is for version-controlled Directus extensions.
## Usage Options:
### Option 1: Volume Mount (Current)
- Extensions are stored in Docker volume
- Persist through container restarts
- Not version controlled
### Option 2: Bind Mount (Version Controlled)
- Change docker-compose.infrastructure.yml to:
```yaml
- ./directus/extensions:/directus/extensions
```
- Extensions are version controlled in this directory
- Deployed with your application code
## Directory Structure:
```
directus/extensions/
├── hooks/ # Server-side hooks
├── endpoints/ # Custom API endpoints
├── interfaces/ # Admin panel interfaces
├── displays/ # Field display components
└── modules/ # Admin panel modules
```
For production, consider Option 2 to version control your custom extensions.
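As a concrete starting point for Option 2, a minimal `hooks/` extension might look like the sketch below. The registration shape (a default export receiving `{ filter, action }`) follows Directus's hook extensions; the hook name, the `title` field, and the file layout are illustrative, not from this project, and in practice the extension would be scaffolded and built with Directus's extension tooling:

```javascript
// Hypothetical hooks/trim-title/index.js — a minimal Directus "filter" hook
// sketch (CommonJS). Trims a string `title` before any item is created.
function registerHook({ filter }) {
  filter('items.create', (payload) => {
    if (payload && typeof payload.title === 'string') {
      payload.title = payload.title.trim();
    }
    return payload;
  });
}

module.exports = registerHook;
```

With Option 2's bind mount, a built extension placed under `directus/extensions/hooks/` would be picked up by Directus on startup.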

docker-compose.astro.yml (new file, 47 lines)

@@ -0,0 +1,47 @@
version: '3.8'
services:
bct-app:
build:
context: .
dockerfile: Dockerfile
target: production
container_name: bct-astro
ports:
- "3000:3000"
labels:
- "com.blackcanyon.role=astro-app"
- "maintainer=tyler@crispygoat.com"
environment:
- NODE_ENV=production
- HOST=0.0.0.0
- PORT=3000
# Supabase
- PUBLIC_SUPABASE_URL=${PUBLIC_SUPABASE_URL}
- PUBLIC_SUPABASE_ANON_KEY=${PUBLIC_SUPABASE_ANON_KEY}
- SUPABASE_SERVICE_ROLE_KEY=${SUPABASE_SERVICE_ROLE_KEY}
# Stripe
- STRIPE_PUBLISHABLE_KEY=${STRIPE_PUBLISHABLE_KEY}
- STRIPE_SECRET_KEY=${STRIPE_SECRET_KEY}
- STRIPE_WEBHOOK_SECRET=${STRIPE_WEBHOOK_SECRET}
# Email
- RESEND_API_KEY=${RESEND_API_KEY}
# Monitoring
- SENTRY_DSN=${SENTRY_DSN}
- SENTRY_RELEASE=${SENTRY_RELEASE}
volumes:
- ./logs:/app/logs
restart: unless-stopped
healthcheck:
test: ["CMD", "node", "-e", "const http=require('http');const options={hostname:'localhost',port:3000,path:'/api/health',timeout:2000};const req=http.request(options,(res)=>{process.exit(res.statusCode===200?0:1)});req.on('error',()=>{process.exit(1)});req.end();"]
interval: 30s
timeout: 5s
retries: 5
start_period: 40s
networks:
- bct-network
networks:
  bct-network:
    external: true


@@ -0,0 +1,101 @@
version: '3.8'
services:
postgres:
image: postgres:15.5-alpine
container_name: bct-postgres
environment:
POSTGRES_DB: directus
POSTGRES_USER: directus
POSTGRES_PASSWORD: ${DIRECTUS_DB_PASSWORD}
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U directus -d directus"]
interval: 10s
timeout: 5s
retries: 5
networks:
- bct-network
directus:
image: directus/directus:10.12.0
container_name: bct-directus
ports:
- "8055:8055"
environment:
KEY: ${DIRECTUS_KEY}
SECRET: ${DIRECTUS_SECRET}
# Database
DB_CLIENT: pg
DB_HOST: postgres
DB_PORT: 5432
DB_DATABASE: directus
DB_USER: directus
DB_PASSWORD: ${DIRECTUS_DB_PASSWORD}
# Security
CORS_ENABLED: true
CORS_ORIGIN: ${DIRECTUS_CORS_ORIGIN}
# Database initialization & safety
DB_INIT_TIMEOUT: 60000
DB_EXCLUDE_DEFAULTS: false
# Admin user (only creates if no users exist)
ADMIN_EMAIL: ${DIRECTUS_ADMIN_EMAIL}
ADMIN_PASSWORD: ${DIRECTUS_ADMIN_PASSWORD}
# Safety: Prevent database reinitialization
DB_RESET_ON_START: false
# Storage
STORAGE_LOCATIONS: local
STORAGE_LOCAL_ROOT: /directus/uploads
# Cache & Session
CACHE_ENABLED: false
RATE_LIMITER_ENABLED: true
RATE_LIMITER_POINTS: 25
RATE_LIMITER_DURATION: 1
# Email (optional - configure in .env.infrastructure.local)
EMAIL_FROM: ${DIRECTUS_EMAIL_FROM}
EMAIL_TRANSPORT: ${DIRECTUS_EMAIL_TRANSPORT}
EMAIL_SMTP_HOST: ${DIRECTUS_SMTP_HOST}
EMAIL_SMTP_PORT: ${DIRECTUS_SMTP_PORT}
EMAIL_SMTP_USER: ${DIRECTUS_SMTP_USER}
EMAIL_SMTP_PASSWORD: ${DIRECTUS_SMTP_PASSWORD}
volumes:
- directus_uploads:/directus/uploads
# Extensions: Choose one option below
- directus_extensions:/directus/extensions # Option 1: Docker volume (not version controlled)
# - ./directus/extensions:/directus/extensions # Option 2: Bind mount (version controlled)
restart: unless-stopped
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8055/server/health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
networks:
- bct-network
volumes:
postgres_data:
driver: local
directus_uploads:
driver: local
directus_extensions:
driver: local
networks:
  bct-network:
    external: true

docker-compose.override.yml (new file, 134 lines)

@@ -0,0 +1,134 @@
version: '3.8'
# Override file for local development
# This file is automatically loaded by docker-compose up
# It combines both Astro app and infrastructure for full local development
services:
# Astro app (from docker-compose.astro.yml)
bct-app:
build:
context: .
dockerfile: Dockerfile
target: production
container_name: bct-astro-dev
ports:
- "3000:3000"
labels:
- "com.blackcanyon.role=astro-app"
- "com.blackcanyon.env=development"
- "maintainer=tyler@crispygoat.com"
environment:
- NODE_ENV=development
- HOST=0.0.0.0
- PORT=3000
# Supabase
- PUBLIC_SUPABASE_URL=${PUBLIC_SUPABASE_URL}
- PUBLIC_SUPABASE_ANON_KEY=${PUBLIC_SUPABASE_ANON_KEY}
- SUPABASE_SERVICE_ROLE_KEY=${SUPABASE_SERVICE_ROLE_KEY}
# Stripe
- STRIPE_PUBLISHABLE_KEY=${STRIPE_PUBLISHABLE_KEY}
- STRIPE_SECRET_KEY=${STRIPE_SECRET_KEY}
- STRIPE_WEBHOOK_SECRET=${STRIPE_WEBHOOK_SECRET}
# Email
- RESEND_API_KEY=${RESEND_API_KEY}
# Monitoring
- SENTRY_DSN=${SENTRY_DSN}
- SENTRY_RELEASE=development
volumes:
- ./logs:/app/logs
restart: unless-stopped
healthcheck:
test: ["CMD", "node", "-e", "const http=require('http');const options={hostname:'localhost',port:3000,path:'/api/health',timeout:2000};const req=http.request(options,(res)=>{process.exit(res.statusCode===200?0:1)});req.on('error',()=>{process.exit(1)});req.end();"]
interval: 30s
timeout: 5s
retries: 5
start_period: 40s
networks:
- bct-network
depends_on:
directus:
condition: service_healthy
# PostgreSQL (from docker-compose.infrastructure.yml)
postgres:
image: postgres:15.5-alpine
container_name: bct-postgres-dev
environment:
POSTGRES_DB: directus
POSTGRES_USER: directus
POSTGRES_PASSWORD: ${DIRECTUS_DB_PASSWORD:-directus_dev_password}
volumes:
- postgres_data_dev:/var/lib/postgresql/data
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U directus -d directus"]
interval: 10s
timeout: 5s
retries: 5
networks:
- bct-network
# Directus (from docker-compose.infrastructure.yml)
directus:
image: directus/directus:10.12.0
container_name: bct-directus-dev
ports:
- "8055:8055"
environment:
KEY: ${DIRECTUS_KEY:-development-key-12345678901234567890123456789012}
SECRET: ${DIRECTUS_SECRET:-development-secret-abcdef}
# Database
DB_CLIENT: pg
DB_HOST: postgres
DB_PORT: 5432
DB_DATABASE: directus
DB_USER: directus
DB_PASSWORD: ${DIRECTUS_DB_PASSWORD:-directus_dev_password}
# Security
CORS_ENABLED: true
CORS_ORIGIN: http://localhost:3000,http://localhost:4321
# Admin user (development)
ADMIN_EMAIL: ${DIRECTUS_ADMIN_EMAIL:-admin@localhost}
ADMIN_PASSWORD: ${DIRECTUS_ADMIN_PASSWORD:-admin123}
# Storage
STORAGE_LOCATIONS: local
STORAGE_LOCAL_ROOT: /directus/uploads
# Development settings
CACHE_ENABLED: false
LOG_LEVEL: debug
RATE_LIMITER_ENABLED: false
volumes:
- directus_uploads_dev:/directus/uploads
- directus_extensions_dev:/directus/extensions
restart: unless-stopped
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8055/server/health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
networks:
- bct-network
volumes:
postgres_data_dev:
driver: local
directus_uploads_dev:
driver: local
directus_extensions_dev:
driver: local
networks:
  bct-network:
    external: true


@@ -3,8 +3,13 @@ version: '3.8'
 services:
   bct-app:
     image: bct-whitelabel:latest
+    container_name: bct-astro-prod
     ports:
       - "3000:3000"
+    labels:
+      - "com.blackcanyon.role=astro-app"
+      - "com.blackcanyon.env=production"
+      - "maintainer=tyler@crispygoat.com"
     environment:
       - NODE_ENV=production
       - HOST=0.0.0.0
@@ -21,7 +26,7 @@ services:
       - RESEND_API_KEY=${RESEND_API_KEY}
       # Monitoring
       - SENTRY_DSN=${SENTRY_DSN}
-      - SENTRY_RELEASE=${SENTRY_RELEASE:-latest}
+      - SENTRY_RELEASE=${SENTRY_RELEASE}
     env_file:
       - .env
     volumes:
@@ -31,7 +36,7 @@ services:
     healthcheck:
       test: ["CMD", "node", "-e", "const http=require('http');const options={hostname:'localhost',port:3000,path:'/api/health',timeout:2000};const req=http.request(options,(res)=>{process.exit(res.statusCode===200?0:1)});req.on('error',()=>{process.exit(1)});req.end();"]
       interval: 30s
-      timeout: 10s
+      timeout: 5s
       retries: 5
       start_period: 60s
     networks:
@@ -51,8 +56,6 @@ services:
         cpus: '0.5'
 networks:
-  bct-network:
-    driver: bridge
-    ipam:
-      config:
-        - subnet: 172.20.0.0/16
+  default:
+    external:
+      name: bct-network

logrotate-bct (new file, 45 lines)

@@ -0,0 +1,45 @@
# Log rotation configuration for Black Canyon Tickets
# Copy this file to /etc/logrotate.d/bct on your server
/var/www/bct-whitelabel/logs/*.log {
# Rotate daily
daily
# Keep 7 days of logs
rotate 7
# Compress old logs (saves disk space)
compress
# Don't compress the most recent rotated file
delaycompress
# Don't error if log file is missing
missingok
# Don't rotate empty files
notifempty
# Create new log file with specific permissions
create 644 tyler tyler
# Use date as suffix for rotated files
dateext
# Run commands after rotation
postrotate
# Send SIGUSR1 to Docker containers to reopen log files (if needed)
# docker kill --signal=USR1 bct-astro 2>/dev/null || true
endscript
}
# Docker container logs (optional)
/var/lib/docker/containers/*/*-json.log {
daily
rotate 7
compress
delaycompress
missingok
notifempty
copytruncate
}

nginx-example.conf (new file, 57 lines)

@@ -0,0 +1,57 @@
# NGINX Configuration for Black Canyon Tickets + Directus
# Copy to /etc/nginx/sites-available/blackcanyontickets
server {
listen 80;
listen 443 ssl http2;
server_name portal.blackcanyontickets.com;
# SSL Configuration - Certbot will handle this
ssl_certificate /etc/letsencrypt/live/portal.blackcanyontickets.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/portal.blackcanyontickets.com/privkey.pem;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Redirect HTTP to HTTPS
if ($scheme != "https") {
return 301 https://$host$request_uri;
}
# Directus Admin - Route /admin to Directus
location /admin {
rewrite ^/admin/(.*) /$1 break;
proxy_pass http://localhost:8055;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 100M;
}
# Directus API - Route /api/directus to Directus
location /api/directus {
rewrite ^/api/directus/(.*) /$1 break;
proxy_pass http://localhost:8055;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 100M;
}
# Main Astro app - All other routes
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}


@@ -21,7 +21,19 @@
     "docker:down": "docker-compose down",
     "docker:logs": "docker-compose logs -f",
     "docker:prod:up": "docker-compose -f docker-compose.prod.yml up -d",
-    "docker:prod:down": "docker-compose -f docker-compose.prod.yml down"
+    "docker:prod:down": "docker-compose -f docker-compose.prod.yml down",
+    "docker:astro:up": "docker-compose -f docker-compose.astro.yml up -d --build",
+    "docker:astro:down": "docker-compose -f docker-compose.astro.yml down",
+    "docker:astro:logs": "docker-compose -f docker-compose.astro.yml logs -f",
+    "docker:infrastructure:up": "docker-compose -f docker-compose.infrastructure.yml up -d",
+    "docker:infrastructure:down": "docker-compose -f docker-compose.infrastructure.yml down",
+    "docker:infrastructure:logs": "docker-compose -f docker-compose.infrastructure.yml logs -f",
+    "docker:dev": "docker-compose up -d",
+    "docker:dev:build": "docker-compose up -d --build",
+    "db:backup": "./scripts/db-safety.sh backup",
+    "db:restore": "./scripts/db-safety.sh restore",
+    "db:reset": "./scripts/db-safety.sh reset",
+    "db:status": "./scripts/db-safety.sh status"
   },
   "dependencies": {
     "@astrojs/check": "^0.9.4",

scripts/db-safety.sh (new executable file, 162 lines)

@@ -0,0 +1,162 @@
#!/bin/bash
# Database Safety Script for BCT Infrastructure
# Provides safe database operations with confirmations
set -e
RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
PROJECT_DIR="/var/www/bct-whitelabel"
BACKUP_DIR="$PROJECT_DIR/backups"
# Ensure we're in the right directory
cd "$PROJECT_DIR"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
function backup_database() {
echo -e "${GREEN}Creating database backup...${NC}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/directus_backup_$TIMESTAMP.sql"
if docker ps | grep -q "bct-postgres"; then
docker exec bct-postgres pg_dump -U directus directus > "$BACKUP_FILE"
echo -e "${GREEN}✅ Backup created: $BACKUP_FILE${NC}"
else
echo -e "${RED}❌ PostgreSQL container not running${NC}"
exit 1
fi
}
function restore_database() {
echo -e "${YELLOW}Available backups:${NC}"
ls -la "$BACKUP_DIR"/*.sql 2>/dev/null || echo "No backups found"
read -p "Enter backup filename: " BACKUP_FILE
if [ ! -f "$BACKUP_DIR/$BACKUP_FILE" ]; then
echo -e "${RED}❌ Backup file not found${NC}"
exit 1
fi
echo -e "${RED}⚠️ WARNING: This will overwrite the current database!${NC}"
read -p "Type 'CONFIRM' to proceed: " CONFIRM
if [ "$CONFIRM" != "CONFIRM" ]; then
echo "Operation cancelled"
exit 1
fi
# Stop Directus to prevent conflicts
docker-compose -f docker-compose.infrastructure.yml stop directus
# Restore database
docker exec bct-postgres dropdb -U directus directus --if-exists
docker exec bct-postgres createdb -U directus directus
docker exec -i bct-postgres psql -U directus directus < "$BACKUP_DIR/$BACKUP_FILE"
# Restart Directus
docker-compose -f docker-compose.infrastructure.yml up -d directus
echo -e "${GREEN}✅ Database restored from $BACKUP_FILE${NC}"
}
function reset_database() {
echo -e "${RED}⚠️ WARNING: This will PERMANENTLY DELETE all database data!${NC}"
echo -e "${RED}This includes:${NC}"
echo -e "${RED}- All Directus content and collections${NC}"
echo -e "${RED}- All user accounts${NC}"
echo -e "${RED}- All uploaded files${NC}"
echo -e "${RED}- All extensions${NC}"
echo ""
echo -e "${YELLOW}Volumes that will be deleted:${NC}"
echo "- bct-whitelabel_postgres_data"
echo "- bct-whitelabel_directus_uploads"
echo "- bct-whitelabel_directus_extensions"
echo ""
read -p "Type 'DELETE_EVERYTHING' to confirm: " CONFIRM
if [ "$CONFIRM" != "DELETE_EVERYTHING" ]; then
echo "Operation cancelled"
exit 1
fi
# Create final backup before deletion
echo -e "${YELLOW}Creating final backup before deletion...${NC}"
backup_database
# Stop and remove containers
docker-compose -f docker-compose.infrastructure.yml down
# Remove volumes
docker volume rm bct-whitelabel_postgres_data bct-whitelabel_directus_uploads bct-whitelabel_directus_extensions
echo -e "${GREEN}✅ Database completely reset${NC}"
echo -e "${YELLOW}To recreate infrastructure: npm run docker:infrastructure:up${NC}"
}
function check_status() {
echo -e "${GREEN}Infrastructure Status:${NC}"
echo ""
# Check containers
if docker ps | grep -q "bct-postgres"; then
echo -e "PostgreSQL: ${GREEN}✅ Running${NC}"
else
echo -e "PostgreSQL: ${RED}❌ Not running${NC}"
fi
if docker ps | grep -q "bct-directus"; then
echo -e "Directus: ${GREEN}✅ Running${NC}"
else
echo -e "Directus: ${RED}❌ Not running${NC}"
fi
echo ""
# Check volumes
echo -e "${GREEN}Data Volumes:${NC}"
docker volume ls | grep bct-whitelabel || echo "No volumes found"
echo ""
# Check recent backups
echo -e "${GREEN}Recent Backups:${NC}"
ls -la "$BACKUP_DIR"/*.sql 2>/dev/null | tail -5 || echo "No backups found"
}
# Main menu
case "$1" in
backup)
backup_database
;;
restore)
restore_database
;;
reset)
reset_database
;;
status)
check_status
;;
*)
echo "BCT Database Safety Script"
echo ""
echo "Usage: $0 {backup|restore|reset|status}"
echo ""
echo "Commands:"
echo " backup - Create database backup"
echo " restore - Restore from backup (with confirmation)"
echo " reset - Complete database reset (with confirmation)"
echo " status - Check infrastructure status"
echo ""
exit 1
;;
esac