refactor: Remove Directus infrastructure, simplify Docker deployment

- Remove Directus CMS infrastructure (docker-compose.infrastructure.yml)
- Simplify to Astro-only deployment using existing Supabase backend
- Clean up docker-compose.override.yml to focus on local development
- Update NGINX config to proxy only to Astro app
- Remove Directus-related npm scripts and database management tools
- Streamline deployment guide for Supabase + Astro architecture

Deployment workflow:
- Local: npm run docker:dev (Astro + Supabase hosted)
- Production: npm run docker:astro:up (Astro only)

Benefits:
- Simpler architecture with proven Supabase backend
- Faster deployments (Astro only)
- Zero database downtime
- Reduced operational complexity

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
commit 14917a3e13 (parent 6322126b29)
2025-07-12 19:23:10 -06:00
8 changed files with 149 additions and 651 deletions

View File

@@ -1,89 +0,0 @@
# Directus Infrastructure Environment Variables
# Copy this file to .env.infrastructure.local on your server and fill in the values
# =====================================
# REQUIRED: Security Keys & Database
# =====================================
# Generate these with: openssl rand -hex 32
DIRECTUS_KEY=your-directus-key-here-32-chars-minimum-abcdef1234567890
DIRECTUS_SECRET=your-directus-secret-here-32-chars-minimum-abcdef1234567890
# Strong database password
DIRECTUS_DB_PASSWORD=your-secure-database-password-here
# =====================================
# REQUIRED: Admin Account Setup
# =====================================
# Admin account created on first run only
DIRECTUS_ADMIN_EMAIL=admin@blackcanyontickets.com
DIRECTUS_ADMIN_PASSWORD=your-secure-admin-password-here
# =====================================
# REQUIRED: CORS Configuration
# =====================================
# Production domain(s) - REQUIRED, no fallbacks
DIRECTUS_CORS_ORIGIN=https://portal.blackcanyontickets.com
# =====================================
# REQUIRED: Email Configuration
# =====================================
# All email variables are REQUIRED - configure for production
DIRECTUS_EMAIL_FROM=cms@blackcanyontickets.com
DIRECTUS_EMAIL_TRANSPORT=smtp
DIRECTUS_SMTP_HOST=smtp.resend.com
DIRECTUS_SMTP_PORT=587
DIRECTUS_SMTP_USER=resend
DIRECTUS_SMTP_PASSWORD=your-resend-api-key-here
# =====================================
# SETUP INSTRUCTIONS
# =====================================
# 1. Copy this file: cp .env.infrastructure .env.infrastructure.local
# 2. Generate random keys: openssl rand -hex 32
# 3. Set strong passwords for database and admin
# 4. Update CORS origins to match your domain(s)
# 5. Configure email settings if needed
# 6. Load environment: export $(cat .env.infrastructure.local | xargs)
# 7. Start infrastructure: npm run docker:infrastructure:up
# =====================================
# DATABASE INITIALIZATION
# =====================================
# Directus will automatically:
# - Create database tables on first run
# - Set up admin user with DIRECTUS_ADMIN_EMAIL/PASSWORD
# - Initialize storage and extensions directories
# - Apply database migrations
# Check logs if initialization fails:
# docker logs bct-directus
# =====================================
# DATABASE SAFETY PROTECTIONS
# =====================================
# 🚨 IMPORTANT DATABASE SAFETY NOTES:
# 1. Named volumes prevent accidental data loss:
# - postgres_data: PostgreSQL database files
# - directus_uploads: User uploaded files
# - directus_extensions: Custom extensions
# 2. Admin user only created if no users exist
# - Safe to restart containers without overwriting users
# - Set DIRECTUS_ALLOW_ADMIN_CREATION=false after first setup
# 3. To completely reset database (⚠️ DATA LOSS):
# docker-compose -f docker-compose.infrastructure.yml down
# docker volume rm bct-whitelabel_postgres_data
# docker volume rm bct-whitelabel_directus_uploads
# docker volume rm bct-whitelabel_directus_extensions
# 4. To backup before major changes:
# docker exec bct-postgres pg_dump -U directus directus > backup.sql
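The `openssl rand -hex 32` step in the setup instructions above can be sanity-checked in isolation. A minimal sketch, assuming the `openssl` CLI is available (with an `od`/`/dev/urandom` fallback); the `KEY` variable is a throwaway example, not part of the template:

```shell
# Generate a Directus-style secret: 32 random bytes, hex-encoded.
# Assumes the openssl CLI is installed; falls back to /dev/urandom via od.
if command -v openssl >/dev/null 2>&1; then
  KEY=$(openssl rand -hex 32)
else
  KEY=$(od -An -N32 -tx1 /dev/urandom | tr -d ' \n')
fi
# 32 random bytes hex-encoded = 64 characters, which satisfies the
# "32 chars minimum" requirement for DIRECTUS_KEY / DIRECTUS_SECRET.
echo "${#KEY}"   # prints 64
```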

View File

@@ -1,12 +1,12 @@
 # Docker Deployment Guide
-This guide covers setting up Black Canyon Tickets with separated Docker Compose files for optimal deployment workflow.
+This guide covers setting up Black Canyon Tickets with optimized Docker deployment for your Astro application.
 ## Overview
-- **Astro App**: Rebuilt on each Git deployment
-- **Directus + PostgreSQL**: Persistent infrastructure, deployed once
-- **NGINX**: Reverse proxy to both services
+- **Astro App**: Rebuilt on each Git deployment using existing Supabase backend
+- **Database**: Uses your existing hosted Supabase PostgreSQL + Auth
+- **NGINX**: Reverse proxy to Astro application
 - **Certbot**: SSL certificates (existing setup)
 ## Server Setup (One-Time)
@@ -36,55 +36,31 @@ cd bct-whitelabel
 ### 3. Configure Environment
-```bash
-# Copy infrastructure environment template
-cp .env.infrastructure .env.infrastructure.local
-# Edit with your production values
-nano .env.infrastructure.local
-```
-**Required values in `.env.infrastructure.local`:**
-```bash
-# Generate these with: openssl rand -hex 32
-DIRECTUS_KEY=your-32-char-random-key-here
-DIRECTUS_SECRET=your-32-char-random-secret-here
-# Strong passwords
-DIRECTUS_DB_PASSWORD=your-secure-db-password
-DIRECTUS_ADMIN_PASSWORD=your-secure-admin-password
-# Your domain
-DIRECTUS_ADMIN_EMAIL=admin@blackcanyontickets.com
-DIRECTUS_CORS_ORIGIN=https://portal.blackcanyontickets.com
-# Email (optional)
-DIRECTUS_SMTP_PASSWORD=your-resend-api-key
-```
-### 4. Create Docker Network
-```bash
-# Create shared network for services
-docker network create bct-network
-```
-### 5. Deploy Infrastructure
-```bash
-# Load environment and start infrastructure
-export $(cat .env.infrastructure.local | xargs)
-npm run docker:infrastructure:up
-# Verify services are running
-docker ps
-npm run docker:infrastructure:logs
-```
-### 6. Configure NGINX
-```bash
-# Copy simplified configuration
+Your application uses Supabase (hosted) so just ensure your `.env` file has:
+```bash
+# Supabase (your existing hosted database)
+PUBLIC_SUPABASE_URL=https://zctjaivtfyfxokfaemek.supabase.co
+PUBLIC_SUPABASE_ANON_KEY=your-anon-key
+SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
+# Stripe
+STRIPE_PUBLISHABLE_KEY=pk_...
+STRIPE_SECRET_KEY=sk_...
+STRIPE_WEBHOOK_SECRET=whsec_...
+# Email
+RESEND_API_KEY=re_...
+# Monitoring
+SENTRY_DSN=https://...
+SENTRY_RELEASE=production
+```
+### 4. Configure NGINX
+```bash
+# Copy example configuration
 sudo cp nginx-example.conf /etc/nginx/sites-available/blackcanyontickets
 # Enable site
@@ -94,7 +70,7 @@ sudo ln -s /etc/nginx/sites-available/blackcanyontickets /etc/nginx/sites-enable
 sudo nginx -t
 ```
-### 7. Setup SSL with Certbot
+### 5. Setup SSL with Certbot
 ```bash
 # Get SSL certificate (Certbot handles NGINX config automatically)
@@ -104,9 +80,19 @@ sudo certbot --nginx -d portal.blackcanyontickets.com
 sudo systemctl reload nginx
 ```
+### 6. Set Up Log Rotation
+```bash
+# Install log rotation
+sudo cp logrotate-bct /etc/logrotate.d/bct
+# Test log rotation
+sudo logrotate -d /etc/logrotate.d/bct
+```
 ## Git Deployment Script
-Update your deployment script to only rebuild the Astro app:
+Update your deployment script to rebuild only the Astro app:
 ### Simple Deploy Script
@@ -122,169 +108,136 @@ cd /var/www/bct-whitelabel
 # Pull latest changes
 git pull origin main
-# Rebuild only Astro app (infrastructure stays running)
+# Rebuild Astro app
 npm run docker:astro:up
 echo "Deployment complete!"
 ```
-**That's it!** Your infrastructure (Directus + PostgreSQL) keeps running.
+**Your Supabase database stays online** - no downtime for deployments!
 ## Daily Operations
 ### Check Service Status
 ```bash
-# View all running containers
+# View running containers
 docker ps
 # Check logs
-npm run docker:astro:logs           # Astro app logs
-npm run docker:infrastructure:logs  # Directus + PostgreSQL logs
+npm run docker:astro:logs
 # Health checks
-curl http://localhost:3000/api/health     # Astro health
-curl http://localhost:8055/server/health  # Directus health
+curl http://localhost:3000/api/health
 ```
 ### Restart Services
 ```bash
-# Restart Astro app only
+# Restart Astro app
 npm run docker:astro:down
 npm run docker:astro:up
-# Restart infrastructure (rare)
-npm run docker:infrastructure:down
-npm run docker:infrastructure:up
 ```
-### View Service URLs
+### Service URLs
 - **Main App**: https://portal.blackcanyontickets.com
-- **Directus Admin**: https://portal.blackcanyontickets.com/admin
-- **Directus API**: https://portal.blackcanyontickets.com/api/directus
+- **Admin Panel**: https://portal.blackcanyontickets.com/admin
+- **Health Check**: https://portal.blackcanyontickets.com/api/health
+## Available Commands
+### Docker Commands
+```bash
+# Production deployment (Astro only)
+npm run docker:astro:up    # Deploy Astro app
+npm run docker:astro:down  # Stop Astro app
+npm run docker:astro:logs  # View Astro logs
+# Production (pre-built image)
+npm run docker:prod:up     # Deploy pre-built image
+npm run docker:prod:down   # Stop production image
+# Local development
+npm run docker:dev         # Start development container
+npm run docker:dev:build   # Start with rebuild
+```
 ## Backup Strategy
-### Database Backup
+### Supabase Backups
+Since you're using hosted Supabase:
+- **Automatic backups** are handled by Supabase
+- **Point-in-time recovery** available through Supabase dashboard
+- **Manual exports** can be done through Supabase SQL editor
+### Application Backups
 ```bash
-# Create backup script
-cat > backup-db.sh << 'EOF'
+# Create backup script for logs and uploads
+cat > backup-app.sh << 'EOF'
 #!/bin/bash
 BACKUP_DIR="/var/backups/bct"
 DATE=$(date +%Y%m%d_%H%M%S)
 mkdir -p $BACKUP_DIR
-# Backup PostgreSQL
-docker exec bct-whitelabel-postgres-1 pg_dump -U directus directus > $BACKUP_DIR/directus_$DATE.sql
+# Backup logs
+tar -czf $BACKUP_DIR/logs_$DATE.tar.gz logs/
 # Keep only last 7 days
-find $BACKUP_DIR -name "directus_*.sql" -mtime +7 -delete
-echo "Backup completed: $BACKUP_DIR/directus_$DATE.sql"
+find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete
+echo "Backup completed: $BACKUP_DIR"
 EOF
-chmod +x backup-db.sh
+chmod +x backup-app.sh
 # Add to crontab for daily backups
-echo "0 2 * * * /var/www/bct-whitelabel/backup-db.sh" | crontab -
+echo "0 2 * * * /var/www/bct-whitelabel/backup-app.sh" | crontab -
 ```
-### Upload Backup
-```bash
-# Backup Directus uploads
-tar -czf /var/backups/bct/directus_uploads_$(date +%Y%m%d).tar.gz \
-  -C /var/lib/docker/volumes/bct-whitelabel_directus_uploads/_data .
-```
 ## Troubleshooting
 ### Common Issues
-1. **Services won't start**
+1. **Container won't start**
    ```bash
    # Check logs
-   docker logs bct-whitelabel-directus-1
-   docker logs bct-whitelabel-postgres-1
-   # Check network
-   docker network ls | grep bct-network
-   ```
-2. **Database connection issues**
-   ```bash
-   # Verify PostgreSQL is running
-   docker exec bct-whitelabel-postgres-1 pg_isready -U directus
+   docker logs bct-astro
    # Check environment variables
-   echo $DIRECTUS_DB_PASSWORD
+   env | grep SUPABASE
    ```
-3. **NGINX proxy errors**
+2. **NGINX proxy errors**
   ```bash
   # Test NGINX config
   sudo nginx -t
   # Check upstream connectivity
-  curl http://localhost:3000
-  curl http://localhost:8055
+  curl http://localhost:3000/api/health
   ```
-### Reset Infrastructure (if needed)
-```bash
-# WARNING: This will delete all Directus data
-npm run docker:infrastructure:down
-docker volume rm bct-whitelabel_postgres_data bct-whitelabel_directus_uploads bct-whitelabel_directus_extensions
-npm run docker:infrastructure:up
-```
+3. **SSL certificate issues**
+   ```bash
+   # Renew certificate
+   sudo certbot renew
+   # Check certificate status
+   sudo certbot certificates
+   ```
-## Monitoring
-### Log Monitoring
-```bash
-# Real-time logs
-tail -f /var/log/nginx/access.log
-npm run docker:astro:logs -f
-npm run docker:infrastructure:logs -f
-# Log rotation (add to /etc/logrotate.d/bct)
-/var/www/bct-whitelabel/logs/*.log {
-  daily
-  missingok
-  rotate 7
-  compress
-  delaycompress
-  notifempty
-  sharedscripts
-}
-```
-### Resource Monitoring
-```bash
-# Container stats
-docker stats
-# Disk usage
-docker system df
-docker volume ls
-```
 ## Auto-Start Services on Boot
 ### Configure Docker Services to Auto-Start
 ```bash
-# Create systemd service for infrastructure
-sudo tee /etc/systemd/system/bct-infrastructure.service > /dev/null << 'EOF'
+# Create systemd service for Astro app
+sudo tee /etc/systemd/system/bct-astro.service > /dev/null << 'EOF'
 [Unit]
-Description=BCT Infrastructure (Directus + PostgreSQL)
+Description=BCT Astro Application
 Requires=docker.service
 After=docker.service
@@ -292,8 +245,8 @@ After=docker.service
 Type=oneshot
 RemainAfterExit=yes
 WorkingDirectory=/var/www/bct-whitelabel
-ExecStart=/usr/bin/docker-compose -f docker-compose.infrastructure.yml up -d
-ExecStop=/usr/bin/docker-compose -f docker-compose.infrastructure.yml down
+ExecStart=/usr/bin/docker-compose -f docker-compose.astro.yml up -d
+ExecStop=/usr/bin/docker-compose -f docker-compose.astro.yml down
 TimeoutStartSec=0
 [Install]
@@ -301,11 +254,11 @@ WantedBy=multi-user.target
 EOF
 # Enable and start the service
-sudo systemctl enable bct-infrastructure.service
-sudo systemctl start bct-infrastructure.service
+sudo systemctl enable bct-astro.service
+sudo systemctl start bct-astro.service
 ```
-### One-Command Astro Redeploy
+### One-Command Deployment
 Add this to your server for quick deployments:
@@ -318,4 +271,31 @@ source ~/.bashrc
 redeploy-bct
 ```
-This setup provides a robust, maintainable deployment pipeline where your Astro app can be updated frequently while keeping your CMS and database stable.
+## Monitoring
+### Log Monitoring
+```bash
+# Real-time logs
+tail -f /var/log/nginx/access.log
+npm run docker:astro:logs -f
+# Container stats
+docker stats bct-astro
+# Disk usage
+docker system df
+```
+### Resource Monitoring
+```bash
+# Container resource usage
+docker stats
+# System resources
+htop
+df -h
+```
+This setup provides a robust, maintainable deployment pipeline where your Astro app can be updated frequently while your Supabase database remains stable and always available.
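The 7-day retention line in the backup script above (`find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete`) can be sanity-checked in isolation. A minimal sketch using a throwaway directory and illustrative filenames (assumes GNU `touch`, for the `-d '10 days ago'` relative date):

```shell
# Simulate the backup directory with one fresh and one stale archive.
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/logs_new.tar.gz"
touch -d '10 days ago' "$BACKUP_DIR/logs_old.tar.gz"   # GNU touch
# Same retention expression as backup-app.sh: drop archives older than 7 days.
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -delete
ls "$BACKUP_DIR"   # → logs_new.tar.gz
rm -rf "$BACKUP_DIR"
```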

View File

@@ -1,30 +0,0 @@
# Directus Extensions Directory
This directory is for version-controlled Directus extensions.
## Usage Options:
### Option 1: Volume Mount (Current)
- Extensions are stored in Docker volume
- Persist through container restarts
- Not version controlled
### Option 2: Bind Mount (Version Controlled)
- Change docker-compose.infrastructure.yml to:
```yaml
- ./directus/extensions:/directus/extensions
```
- Extensions are version controlled in this directory
- Deployed with your application code
## Directory Structure:
```
directus/extensions/
├── hooks/ # Server-side hooks
├── endpoints/ # Custom API endpoints
├── interfaces/ # Admin panel interfaces
├── displays/ # Field display components
└── modules/ # Admin panel modules
```
For production, consider Option 2 to version control your custom extensions.

View File

@@ -1,101 +0,0 @@
version: '3.8'
services:
  postgres:
    image: postgres:15.5-alpine
    container_name: bct-postgres
    environment:
      POSTGRES_DB: directus
      POSTGRES_USER: directus
      POSTGRES_PASSWORD: ${DIRECTUS_DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U directus -d directus"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - bct-network
  directus:
    image: directus/directus:10.12.0
    container_name: bct-directus
    ports:
      - "8055:8055"
    environment:
      KEY: ${DIRECTUS_KEY}
      SECRET: ${DIRECTUS_SECRET}
      # Database
      DB_CLIENT: pg
      DB_HOST: postgres
      DB_PORT: 5432
      DB_DATABASE: directus
      DB_USER: directus
      DB_PASSWORD: ${DIRECTUS_DB_PASSWORD}
      # Security
      CORS_ENABLED: true
      CORS_ORIGIN: ${DIRECTUS_CORS_ORIGIN}
      # Database initialization & safety
      DB_INIT_TIMEOUT: 60000
      DB_EXCLUDE_DEFAULTS: false
      # Admin user (only creates if no users exist)
      ADMIN_EMAIL: ${DIRECTUS_ADMIN_EMAIL}
      ADMIN_PASSWORD: ${DIRECTUS_ADMIN_PASSWORD}
      # Safety: Prevent database reinitialization
      DB_RESET_ON_START: false
      # Storage
      STORAGE_LOCATIONS: local
      STORAGE_LOCAL_ROOT: /directus/uploads
      # Cache & Session
      CACHE_ENABLED: false
      RATE_LIMITER_ENABLED: true
      RATE_LIMITER_POINTS: 25
      RATE_LIMITER_DURATION: 1
      # Email (optional - configure in .env.infrastructure.local)
      EMAIL_FROM: ${DIRECTUS_EMAIL_FROM}
      EMAIL_TRANSPORT: ${DIRECTUS_EMAIL_TRANSPORT}
      EMAIL_SMTP_HOST: ${DIRECTUS_SMTP_HOST}
      EMAIL_SMTP_PORT: ${DIRECTUS_SMTP_PORT}
      EMAIL_SMTP_USER: ${DIRECTUS_SMTP_USER}
      EMAIL_SMTP_PASSWORD: ${DIRECTUS_SMTP_PASSWORD}
    volumes:
      - directus_uploads:/directus/uploads
      # Extensions: Choose one option below
      - directus_extensions:/directus/extensions  # Option 1: Docker volume (not version controlled)
      # - ./directus/extensions:/directus/extensions  # Option 2: Bind mount (version controlled)
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8055/server/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    networks:
      - bct-network
volumes:
  postgres_data:
    driver: local
  directus_uploads:
    driver: local
  directus_extensions:
    driver: local
networks:
  default:
    external:
      name: bct-network

View File

@@ -2,10 +2,10 @@ version: '3.8'
 # Override file for local development
 # This file is automatically loaded by docker-compose up
-# It combines both Astro app and infrastructure for full local development
+# Simplified development setup with just the Astro app
 services:
-  # Astro app (from docker-compose.astro.yml)
+  # Astro app for local development
   bct-app:
     build:
       context: .
@@ -22,7 +22,7 @@ services:
       - NODE_ENV=development
       - HOST=0.0.0.0
       - PORT=3000
-      # Supabase
+      # Supabase (your existing hosted database)
       - PUBLIC_SUPABASE_URL=${PUBLIC_SUPABASE_URL}
       - PUBLIC_SUPABASE_ANON_KEY=${PUBLIC_SUPABASE_ANON_KEY}
       - SUPABASE_SERVICE_ROLE_KEY=${SUPABASE_SERVICE_ROLE_KEY}
@@ -46,89 +46,7 @@ services:
       start_period: 40s
     networks:
       - bct-network
-    depends_on:
-      directus:
-        condition: service_healthy
-  # PostgreSQL (from docker-compose.infrastructure.yml)
-  postgres:
-    image: postgres:15.5-alpine
-    container_name: bct-postgres-dev
-    environment:
-      POSTGRES_DB: directus
-      POSTGRES_USER: directus
-      POSTGRES_PASSWORD: ${DIRECTUS_DB_PASSWORD:-directus_dev_password}
-    volumes:
-      - postgres_data_dev:/var/lib/postgresql/data
-    restart: unless-stopped
-    healthcheck:
-      test: ["CMD-SHELL", "pg_isready -U directus -d directus"]
-      interval: 10s
-      timeout: 5s
-      retries: 5
-    networks:
-      - bct-network
-  # Directus (from docker-compose.infrastructure.yml)
-  directus:
-    image: directus/directus:10.12.0
-    container_name: bct-directus-dev
-    ports:
-      - "8055:8055"
-    environment:
-      KEY: ${DIRECTUS_KEY:-development-key-12345678901234567890123456789012}
-      SECRET: ${DIRECTUS_SECRET:-development-secret-abcdef}
-      # Database
-      DB_CLIENT: pg
-      DB_HOST: postgres
-      DB_PORT: 5432
-      DB_DATABASE: directus
-      DB_USER: directus
-      DB_PASSWORD: ${DIRECTUS_DB_PASSWORD:-directus_dev_password}
-      # Security
-      CORS_ENABLED: true
-      CORS_ORIGIN: http://localhost:3000,http://localhost:4321
-      # Admin user (development)
-      ADMIN_EMAIL: ${DIRECTUS_ADMIN_EMAIL:-admin@localhost}
-      ADMIN_PASSWORD: ${DIRECTUS_ADMIN_PASSWORD:-admin123}
-      # Storage
-      STORAGE_LOCATIONS: local
-      STORAGE_LOCAL_ROOT: /directus/uploads
-      # Development settings
-      CACHE_ENABLED: false
-      LOG_LEVEL: debug
-      RATE_LIMITER_ENABLED: false
-    volumes:
-      - directus_uploads_dev:/directus/uploads
-      - directus_extensions_dev:/directus/extensions
-    restart: unless-stopped
-    depends_on:
-      postgres:
-        condition: service_healthy
-    healthcheck:
-      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8055/server/health"]
-      interval: 30s
-      timeout: 10s
-      retries: 5
-      start_period: 30s
-    networks:
-      - bct-network
-volumes:
-  postgres_data_dev:
-    driver: local
-  directus_uploads_dev:
-    driver: local
-  directus_extensions_dev:
-    driver: local
 networks:
-  default:
-    external:
-      name: bct-network
+  bct-network:
+    driver: bridge

View File

@@ -1,4 +1,4 @@
-# NGINX Configuration for Black Canyon Tickets + Directus
+# NGINX Configuration for Black Canyon Tickets
 # Copy to /etc/nginx/sites-available/blackcanyontickets
 server {
@@ -20,31 +20,7 @@ server {
     return 301 https://$host$request_uri;
 }
-    # Directus Admin - Route /admin to Directus
-    location /admin {
-        rewrite ^/admin/(.*) /$1 break;
-        proxy_pass http://localhost:8055;
-        proxy_http_version 1.1;
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        client_max_body_size 100M;
-    }
-    # Directus API - Route /api/directus to Directus
-    location /api/directus {
-        rewrite ^/api/directus/(.*) /$1 break;
-        proxy_pass http://localhost:8055;
-        proxy_http_version 1.1;
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        client_max_body_size 100M;
-    }
-    # Main Astro app - All other routes
+    # Astro app - All routes
     location / {
         proxy_pass http://localhost:3000;
         proxy_http_version 1.1;
@@ -52,6 +28,19 @@ server {
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
+        # Health check endpoint
+        location = /api/health {
+            access_log off;
+            proxy_pass http://localhost:3000;
+        }
     }
+    # Static file optimization
+    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
+        proxy_pass http://localhost:3000;
+        expires 1y;
+        add_header Cache-Control "public, immutable";
+        add_header X-Content-Type-Options nosniff;
+    }
 }

View File

@@ -25,15 +25,8 @@
     "docker:astro:up": "docker-compose -f docker-compose.astro.yml up -d --build",
     "docker:astro:down": "docker-compose -f docker-compose.astro.yml down",
     "docker:astro:logs": "docker-compose -f docker-compose.astro.yml logs -f",
-    "docker:infrastructure:up": "docker-compose -f docker-compose.infrastructure.yml up -d",
-    "docker:infrastructure:down": "docker-compose -f docker-compose.infrastructure.yml down",
-    "docker:infrastructure:logs": "docker-compose -f docker-compose.infrastructure.yml logs -f",
     "docker:dev": "docker-compose up -d",
-    "docker:dev:build": "docker-compose up -d --build",
-    "db:backup": "./scripts/db-safety.sh backup",
-    "db:restore": "./scripts/db-safety.sh restore",
-    "db:reset": "./scripts/db-safety.sh reset",
-    "db:status": "./scripts/db-safety.sh status"
+    "docker:dev:build": "docker-compose up -d --build"
   },
   "dependencies": {
     "@astrojs/check": "^0.9.4",

View File

@@ -1,162 +0,0 @@
#!/bin/bash
# Database Safety Script for BCT Infrastructure
# Provides safe database operations with confirmations
set -e
RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
PROJECT_DIR="/var/www/bct-whitelabel"
BACKUP_DIR="$PROJECT_DIR/backups"
# Ensure we're in the right directory
cd "$PROJECT_DIR"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
function backup_database() {
    echo -e "${GREEN}Creating database backup...${NC}"
    TIMESTAMP=$(date +%Y%m%d_%H%M%S)
    BACKUP_FILE="$BACKUP_DIR/directus_backup_$TIMESTAMP.sql"
    if docker ps | grep -q "bct-postgres"; then
        docker exec bct-postgres pg_dump -U directus directus > "$BACKUP_FILE"
        echo -e "${GREEN}✅ Backup created: $BACKUP_FILE${NC}"
    else
        echo -e "${RED}❌ PostgreSQL container not running${NC}"
        exit 1
    fi
}

function restore_database() {
    echo -e "${YELLOW}Available backups:${NC}"
    ls -la "$BACKUP_DIR"/*.sql 2>/dev/null || echo "No backups found"
    read -p "Enter backup filename: " BACKUP_FILE
    if [ ! -f "$BACKUP_DIR/$BACKUP_FILE" ]; then
        echo -e "${RED}❌ Backup file not found${NC}"
        exit 1
    fi
    echo -e "${RED}⚠️ WARNING: This will overwrite the current database!${NC}"
    read -p "Type 'CONFIRM' to proceed: " CONFIRM
    if [ "$CONFIRM" != "CONFIRM" ]; then
        echo "Operation cancelled"
        exit 1
    fi
    # Stop Directus to prevent conflicts
    docker-compose -f docker-compose.infrastructure.yml stop directus
    # Restore database
    docker exec bct-postgres dropdb -U directus directus --if-exists
    docker exec bct-postgres createdb -U directus directus
    docker exec -i bct-postgres psql -U directus directus < "$BACKUP_DIR/$BACKUP_FILE"
    # Restart Directus
    docker-compose -f docker-compose.infrastructure.yml up -d directus
    echo -e "${GREEN}✅ Database restored from $BACKUP_FILE${NC}"
}

function reset_database() {
    echo -e "${RED}⚠️ WARNING: This will PERMANENTLY DELETE all database data!${NC}"
    echo -e "${RED}This includes:${NC}"
    echo -e "${RED}- All Directus content and collections${NC}"
    echo -e "${RED}- All user accounts${NC}"
    echo -e "${RED}- All uploaded files${NC}"
    echo -e "${RED}- All extensions${NC}"
    echo ""
    echo -e "${YELLOW}Volumes that will be deleted:${NC}"
    echo "- bct-whitelabel_postgres_data"
    echo "- bct-whitelabel_directus_uploads"
    echo "- bct-whitelabel_directus_extensions"
    echo ""
    read -p "Type 'DELETE_EVERYTHING' to confirm: " CONFIRM
    if [ "$CONFIRM" != "DELETE_EVERYTHING" ]; then
        echo "Operation cancelled"
        exit 1
    fi
    # Create final backup before deletion
    echo -e "${YELLOW}Creating final backup before deletion...${NC}"
    backup_database
    # Stop and remove containers
    docker-compose -f docker-compose.infrastructure.yml down
    # Remove volumes
    docker volume rm bct-whitelabel_postgres_data bct-whitelabel_directus_uploads bct-whitelabel_directus_extensions
    echo -e "${GREEN}✅ Database completely reset${NC}"
    echo -e "${YELLOW}To recreate infrastructure: npm run docker:infrastructure:up${NC}"
}

function check_status() {
    echo -e "${GREEN}Infrastructure Status:${NC}"
    echo ""
    # Check containers
    if docker ps | grep -q "bct-postgres"; then
        echo -e "PostgreSQL: ${GREEN}✅ Running${NC}"
    else
        echo -e "PostgreSQL: ${RED}❌ Not running${NC}"
    fi
    if docker ps | grep -q "bct-directus"; then
        echo -e "Directus: ${GREEN}✅ Running${NC}"
    else
        echo -e "Directus: ${RED}❌ Not running${NC}"
    fi
    echo ""
    # Check volumes
    echo -e "${GREEN}Data Volumes:${NC}"
    docker volume ls | grep bct-whitelabel || echo "No volumes found"
    echo ""
    # Check recent backups
    echo -e "${GREEN}Recent Backups:${NC}"
    ls -la "$BACKUP_DIR"/*.sql 2>/dev/null | tail -5 || echo "No backups found"
}

# Main menu
case "$1" in
    backup)
        backup_database
        ;;
    restore)
        restore_database
        ;;
    reset)
        reset_database
        ;;
    status)
        check_status
        ;;
    *)
        echo "BCT Database Safety Script"
        echo ""
        echo "Usage: $0 {backup|restore|reset|status}"
        echo ""
        echo "Commands:"
        echo "  backup  - Create database backup"
        echo "  restore - Restore from backup (with confirmation)"
        echo "  reset   - Complete database reset (with confirmation)"
        echo "  status  - Check infrastructure status"
        echo ""
        exit 1
        ;;
esac
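The typed-confirmation gate that `restore_database` and `reset_database` use above can be factored into a small reusable helper. A minimal sketch; `confirm_or_abort` is an illustrative name, not part of the original script:

```shell
# Require the caller to type an exact phrase before proceeding.
# Returns 0 on a match, 1 (after printing a cancellation notice) otherwise.
confirm_or_abort() {
    local expected="$1"
    read -r answer
    if [ "$answer" != "$expected" ]; then
        echo "Operation cancelled"
        return 1
    fi
    return 0
}

# Example: only proceeds when the exact phrase arrives on stdin.
echo "DELETE_EVERYTHING" | confirm_or_abort "DELETE_EVERYTHING" && echo "confirmed"   # prints "confirmed"
```

Factoring the gate out keeps the destructive functions focused on the actual docker/volume work while the confirmation behavior stays testable in isolation.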