Created migration file: migrations/001_create_audit_logs_table.sql
- Optimized for the existing 4wdcsa database schema
- 7 columns: log_id, user_id, action, status, ip_address, details, created_at
- 7 indexes for performance (primary key plus 6 secondary covering common queries)
- Foreign key to the users table with ON DELETE SET NULL
- JSON column for flexible metadata storage
- Supports all action types (login, payment, booking, membership)
- Includes sample monitoring queries

Created deployment guide: DATABASE_MIGRATION_GUIDE.md
- 3 deployment options (phpMyAdmin, CLI, GUI tool)
- Pre/post-deployment checklists
- Verification queries
- Rollback procedures
- Performance impact analysis
- Monitoring query examples
- Integration instructions

Ready for immediate deployment to production!
Database Migration & Deployment Guide
Pre-Deployment Checklist
✅ Phase 2 Code Implementation: Complete (committed to git)
✅ Database Schema Analysis: Complete
✅ Migration Script Created: migrations/001_create_audit_logs_table.sql
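Before running the script, you can confirm the two things it depends on: a JSON-capable MySQL server (the guide assumes 8.0.41) and the existing users table that the foreign key references. A quick check, as a sketch:
SELECT VERSION();                        -- expect 8.0.x
SHOW TABLES LIKE 'users';                -- the foreign key target must exist
SHOW COLUMNS FROM users LIKE 'user_id';  -- the referenced column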
How to Deploy the Migration
Option 1: phpMyAdmin (Easiest & Safest)
Backup your database first!
- In phpMyAdmin, select your database 4wdcsa
- Click Export → download a full backup as SQL
- Save the file locally for emergency recovery

Import the migration script
- Open phpMyAdmin → select database 4wdcsa
- Click the Import tab
- Choose the file: migrations/001_create_audit_logs_table.sql
- Click Go to execute

Verify success
- In phpMyAdmin, click on database 4wdcsa
- Scroll down and look for the audit_logs table
- Click it to verify the columns: log_id, user_id, action, status, ip_address, details, created_at
- Check that the indexes were created (should see 7 keys)
Option 2: MySQL Command Line (If you have CLI access)
# From your terminal/SSH
mysql -u username -p database_name < migrations/001_create_audit_logs_table.sql
# Or paste the SQL directly into MySQL CLI
mysql -u username -p database_name
# Then paste the CREATE TABLE statement
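Instead of pasting the statements, the mysql client can also read the file with its source command from the mysql> prompt (the path is relative to the directory where you started the client):
-- at the mysql> prompt; source is a client command, not server-side SQL
source migrations/001_create_audit_logs_table.sql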
Option 3: Using a MySQL GUI Tool
- Open your MySQL client (Workbench, DataGrip, etc.)
- Open the file migrations/001_create_audit_logs_table.sql
- Execute the script
- Verify the table was created
What Gets Created
Main Table: audit_logs
- log_id (INT) - Primary key, auto-increment
- user_id (INT) - Links to users table
- action (VARCHAR) - Type of action (login_success, payment_failure, etc.)
- status (VARCHAR) - success, failure, or pending
- ip_address (VARCHAR) - Client IP for geo-tracking
- details (JSON) - Flexible metadata (email, reason, amount, etc.)
- created_at (TIMESTAMP) - When it happened
Indexes Created (Performance Optimized)
- Primary key on log_id
- Index on user_id (find logs by user)
- Index on action (filter by action type)
- Index on status (find failures)
- Index on created_at (time-range queries)
- Index on ip_address (detect brute force)
- Composite index on user_id + created_at (per-user timeline)
Foreign Key
- Links to users.user_id with ON DELETE SET NULL (logs are kept when a user is deleted)
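Putting the columns, indexes, and foreign key above together, the migration roughly corresponds to a CREATE TABLE along these lines. This is a sketch reconstructed from the descriptions in this guide; column lengths, index names, and table options are assumptions, and the authoritative definition is migrations/001_create_audit_logs_table.sql:
-- Sketch only; see the migration file for the actual definition.
CREATE TABLE audit_logs (
    log_id     INT NOT NULL AUTO_INCREMENT,
    user_id    INT NULL,                      -- NULLable so ON DELETE SET NULL can apply
    action     VARCHAR(50) NOT NULL,          -- e.g. login_success, payment_failure
    status     VARCHAR(20) NOT NULL,          -- success, failure, pending
    ip_address VARCHAR(45) NULL,              -- 45 chars also fits IPv6
    details    JSON NULL,                     -- flexible metadata
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (log_id),
    KEY idx_user_id (user_id),
    KEY idx_action (action),
    KEY idx_status (status),
    KEY idx_created_at (created_at),
    KEY idx_ip_address (ip_address),
    KEY idx_user_created (user_id, created_at),
    CONSTRAINT fk_audit_logs_user
        FOREIGN KEY (user_id) REFERENCES users (user_id)
        ON DELETE SET NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;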
Post-Deployment Verification
1. Check Table Exists
SHOW TABLES LIKE 'audit_logs';
Should return 1 result.
2. Verify Structure
DESCRIBE audit_logs;
Should show 7 columns with correct data types.
3. Verify Indexes
SHOW INDEXES FROM audit_logs;
Should show 8 rows (7 indexes in total; the composite index is listed on two rows, one per column).
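Because SHOW INDEXES lists one row per indexed column, counting the indexes themselves is easier via information_schema (expect 7, matching the list above):
SELECT COUNT(DISTINCT index_name) AS index_count
FROM information_schema.statistics
WHERE table_schema = DATABASE()
  AND table_name = 'audit_logs';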
4. Test Insert (Optional)
INSERT INTO audit_logs (user_id, action, status, ip_address, details)
VALUES (1, 'login_success', 'success', '192.168.1.1', JSON_OBJECT('email', 'test@example.com'));
SELECT * FROM audit_logs WHERE action = 'login_success';
Should return 1 row with your test data.
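If you ran the optional test insert, you can remove the test row afterwards; this sketch assumes it is still identifiable by the test values used above:
DELETE FROM audit_logs
WHERE action = 'login_success'
  AND ip_address = '192.168.1.1'
  AND JSON_UNQUOTE(JSON_EXTRACT(details, '$.email')) = 'test@example.com';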
How the Code Integrates
Login Attempts (validate_login.php)
// Already integrated! Logs automatically:
AuditLogger::logLogin($email, true); // Success
AuditLogger::logLogin($email, false, 'reason'); // Failure
What Gets Logged
✅ Email/password login success/failure
✅ Google OAuth login success
✅ New user registration via Google
✅ Login failure reasons (invalid password, not verified, etc.)
✅ Client IP address
✅ Timestamp
Data Example
{
"log_id": 1,
"user_id": 5,
"action": "login_success",
"status": "success",
"ip_address": "192.168.1.42",
"details": {"email": "john@example.com"},
"created_at": "2025-12-02 20:30:15"
}
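Because details is a JSON column, individual fields can be pulled out directly in SQL with JSON_EXTRACT; for example, to list the email recorded with each login event (a sketch, assuming the details layout shown above):
SELECT log_id,
       JSON_UNQUOTE(JSON_EXTRACT(details, '$.email')) AS email,
       status,
       created_at
FROM audit_logs
WHERE action LIKE 'login%'
ORDER BY created_at DESC
LIMIT 20;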
Rollback Plan (If Something Goes Wrong)
Option 1: Drop the Table
DROP TABLE audit_logs;
The application will still work (AuditLogger has error handling).
Option 2: Restore from Backup
- In phpMyAdmin, click Import
- Choose your backup SQL file from earlier
- It will restore the entire database
Performance Considerations
Storage Impact
- Each log entry: ~250-500 bytes (depending on the size of the details JSON)
- 100 logins/day × ~400 bytes ≈ 40 KB/day, or roughly 15 MB/year
- Logging all bookings/payments as well: ~50 MB/year worst case
- Impact on your database size: negligible ✅
Query Performance
- All indexes optimized for common queries
- Foreign key has ON DELETE SET NULL (won't block deletions)
- JSON_EXTRACT on details is cheap once rows are narrowed down by the indexed columns
- No locks or blocking issues ✅
Monitoring Queries (Run These Later)
See Recent Logins
SELECT user_id, action, status, ip_address, created_at
FROM audit_logs
WHERE action LIKE 'login%'
ORDER BY created_at DESC
LIMIT 20;
Detect Brute Force (failed logins by IP)
SELECT ip_address, COUNT(*) as attempts, MAX(created_at) as latest
FROM audit_logs
WHERE action = 'login_failure'
AND created_at > DATE_SUB(NOW(), INTERVAL 1 HOUR)
GROUP BY ip_address
HAVING attempts > 3
ORDER BY attempts DESC;
See All Actions for a User
SELECT action, status, ip_address, created_at
FROM audit_logs
WHERE user_id = 5
ORDER BY created_at DESC;
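Daily Activity Summary
A roll-up by day and action type is also handy once several action types (login, payment, booking, membership) are being logged; this is an additional sketch, not part of the migration file:
SELECT DATE(created_at) AS log_date,
       action,
       status,
       COUNT(*) AS events
FROM audit_logs
GROUP BY DATE(created_at), action, status
ORDER BY log_date DESC, events DESC;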
After Deployment Steps
- ✅ Run the migration script (create table)
- ✅ Verify table exists and has correct columns
- ✅ Test by logging in to your site (should create audit_logs entry)
- ✅ Check phpMyAdmin → audit_logs table → you should see the login attempt
- ✅ Run one of the monitoring queries above to see the logged data
Questions/Issues?
If the migration fails:
- Check your phpMyAdmin error message
- Verify you have UTF8MB4 character set support (you do ✅)
- Ensure you have permissions to CREATE TABLE (you should ✅)
- Your MySQL version is 8.0.41 (supports JSON perfectly ✅)
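To check the permissions point above, you can list the grants for the account you are connecting with:
-- shows the privileges of the current account, including CREATE
SHOW GRANTS FOR CURRENT_USER();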
The schema is optimized for your existing tables and will integrate seamlessly!