How Claude Code Helped Me Migrate from AWS to DigitalOcean in 4 Hours (With Live Traffic)

At 6 AM this morning, my SaaS was running on AWS. By 10 AM, it was fully migrated to DigitalOcean with zero downtime. The entire migration—infrastructure provisioning, database seeding, SSL setup, DNS cutover, and debugging production issues—was done with Claude Code as my pair programmer.

This is the story of how AI is changing the cloud provider landscape, and why AWS’s moat might not be as deep as they think.

The Problem: $120/month for a Side Project, Plus Unpredictable Traffic Costs

My URL shortener jo4.io was running on AWS with a fairly standard setup:

  • EC2 t3.small ($15/month)
  • RDS PostgreSQL db.t3.micro ($25/month)
  • ElastiCache Redis ($15/month)
  • ALB + ACM ($20/month)
  • S3 + CloudFront ($10/month)
  • Route53, CloudWatch, misc ($35/month)

Total: ~$120/month in fixed costs for a side project that gets maybe 1000 requests/day, plus unpredictable traffic-based charges on top.

DigitalOcean equivalent:

  • Droplet s-2vcpu-4gb ($24/month)
  • Managed PostgreSQL ($15/month)
  • Redis in Docker ($0)
  • Spaces CDN ($5/month)
  • Cloudflare (free)

Total: ~$44/month — a 63% cost reduction.

The Old Way: Weeks of Planning

Traditionally, a migration like this would require:

  1. Week 1-2: Document existing infrastructure, create migration plan
  2. Week 3: Set up new infrastructure manually or write Terraform
  3. Week 4: Test, fix issues, test again
  4. Week 5: Migration weekend with planned downtime
  5. Week 6: Monitor and fix post-migration issues

I’ve done migrations like this before. They’re painful, error-prone, and time-consuming.

The Claude Code Way: 4 Hours, Zero Downtime

Here’s what actually happened:

Hour 1: Infrastructure Setup (6:00 – 7:00 AM)

I told Claude: “I want to migrate from AWS to DigitalOcean. Create GitHub Actions workflows to bootstrap the infrastructure.”

Claude generated three workflows (their core provisioning calls are sketched after the list):

  • DigitalOcean-410-init.yml – Initialize DO project and Spaces
  • DigitalOcean-420-bootstrap.yml – Create droplet and managed PostgreSQL
  • DigitalOcean-430-apply.yml – Configure droplet with Docker, env vars, and deploy scripts
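
Distilled, the bootstrap workflow comes down to a couple of doctl calls. A minimal sketch, using the droplet size and region from the cost list above; the resource names, image, and database size are my assumptions, not values from the actual workflow:

# Hypothetical bootstrap commands; names and image are placeholders
doctl compute droplet create jo4-web \
  --size s-2vcpu-4gb --region nyc3 --image ubuntu-22-04-x64 \
  --ssh-keys "$SSH_KEY_FINGERPRINT" --wait

doctl databases create jo4-pg \
  --engine pg --region nyc3 --size db-s-1vcpu-1gb --num-nodes 1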

But here’s where it gets interesting. The first run failed with:

psql: error: connection to server failed: sslmode value "sslmode=require" invalid

Claude caught the issue immediately: sslmode=require was being passed as a positional argument instead of via the PGSSLMODE environment variable. It fixed the workflow and explained why.
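
For the record, the corrected invocation is a one-line change; $DATABASE_URL here is a placeholder for the managed database's connection string:

# Require SSL through libpq's environment variable, not a psql argument
PGSSLMODE=require psql "$DATABASE_URL" -f /tmp/seed.sql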

Hour 2: Database Migration (7:00 – 8:00 AM)

The database dump from AWS had jo4admin as the owner (my RDS master user). DigitalOcean’s managed PostgreSQL uses doadmin.

The first seed attempt failed with 50+ errors:

psql:/tmp/seed.sql:29: ERROR: role "jo4admin" does not exist

Claude immediately added a fix to the seed workflow:

- name: Fix Owner References
  run: |
    # Replace jo4admin with doadmin (AWS user -> DO user)
    sed -i 's/jo4admin/doadmin/g' /tmp/seed.sql

    # Remove unsupported settings
    sed -i '/SET transaction_timeout/d' /tmp/seed.sql

29 tables, 3000+ rows, migrated cleanly.
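
Putting the pieces together, the dump-and-seed pipeline looked roughly like this; $AWS_RDS_URL and $DO_PG_URL are placeholders, and the exact workflow steps differed:

# Hypothetical end-to-end seed pipeline
pg_dump "$AWS_RDS_URL" > /tmp/seed.sql
sed -i 's/jo4admin/doadmin/g' /tmp/seed.sql        # AWS owner -> DO owner
sed -i '/SET transaction_timeout/d' /tmp/seed.sql  # setting DO's server rejects
PGSSLMODE=require psql "$DO_PG_URL" -f /tmp/seed.sql

In hindsight, dumping with pg_dump --no-owner --no-privileges would sidestep the owner rewrite entirely.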

Hour 3: SSL and nginx (8:00 – 9:00 AM)

This is where things got tricky. Cloudflare was returning 521 errors because:

  1. Cloudflare’s “Full” SSL mode requires the origin server to answer HTTPS on port 443
  2. My droplet only had nginx listening on port 80

Claude generated DigitalOcean-450-ssl.yml that:

  • Creates Cloudflare Origin Certificates via API (sketched below)
  • Deploys them to the droplet
  • Configures nginx with SSL for both frontend and API
  • Sets up proper firewall rules
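
The certificate step uses Cloudflare’s Origin CA endpoint. A minimal sketch, assuming an Origin CA key in $CF_ORIGIN_CA_KEY and a CSR generated beforehand with openssl; the hostnames are mine, and the generated workflow’s parameters may differ:

# Hypothetical origin-certificate request; CF_ORIGIN_CA_KEY and
# origin.csr must already exist (CSR via: openssl req -new ...)
curl -s https://api.cloudflare.com/client/v4/certificates \
  -H "X-Auth-User-Service-Key: $CF_ORIGIN_CA_KEY" \
  -H "Content-Type: application/json" \
  --data "{
    \"hostnames\": [\"jo4.io\", \"*.jo4.io\"],
    \"request_type\": \"origin-rsa\",
    \"requested_validity\": 5475,
    \"csr\": $(jq -Rs . < origin.csr)
  }" | jq -r '.result.certificate' > origin-cert.pem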

But then we hit another issue—SPA routing was broken. /admin/users returned a 403 from DO Spaces.

Hour 4: The SPA Routing Bug (9:00 – 10:00 AM)

The nginx config looked correct:

location / {
    proxy_pass https://jo4-assets.nyc3.digitaloceanspaces.com/client/index.html;
}

But requests to /admin/users were being proxied to:

https://jo4-assets.nyc3.digitaloceanspaces.com/client/index.html/admin/users

nginx was appending the request URI to the proxy path. When proxy_pass includes a URI, nginx replaces the matched location prefix with that URI and appends whatever is left of the request. The fix:

location / {
    rewrite ^ /client/index.html break;
    proxy_pass https://jo4-assets.nyc3.digitaloceanspaces.com;
}

Forcing a rewrite before the proxy makes every SPA route serve index.html correctly.

What Made This Possible

1. Claude Knows the Docs

When I questioned whether a doctl command was correct, Claude verified it against the official DigitalOcean documentation. No more Stack Overflow archaeology.

2. Iterative Debugging

After each failed workflow run, Claude would:

  1. Read the logs via gh run view
  2. Identify the root cause
  3. Propose a fix with explanation
  4. Update the workflow

This tight feedback loop is faster than any human could reasonably sustain alone.
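
From the terminal, that loop looks something like this; the workflow name is from this migration, and the --json/--jq plumbing is my own shorthand:

# Grab the latest run of the failing workflow and read only the
# failed steps' logs
RUN_ID=$(gh run list --workflow DigitalOcean-430-apply.yml \
  --limit 1 --json databaseId --jq '.[0].databaseId')
gh run view "$RUN_ID" --log-failed

# After pushing a fix, follow the new run live
gh run watch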

3. Cross-Domain Knowledge

The migration touched:

  • GitHub Actions YAML
  • Bash scripting
  • PostgreSQL administration
  • nginx configuration
  • Cloudflare API
  • Docker Compose
  • Vite/React builds
  • Spring Boot environment variables

Claude handled all of these without context switching or documentation lookups.

4. Catching Subtle Bugs

The frontend was calling localhost:8080 instead of the production API. Why? The workflow was setting VITE_API_URL but the codebase used VITE_API_BASE. A one-word difference that would have taken hours to debug manually.
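
The fix, echoed in lesson 5 below, was to export the variable the code actually reads immediately before the build; the URL here is a placeholder:

# Hypothetical build step; the real API URL differs
export VITE_API_BASE="https://api.jo4.io"
npm run build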

The AWS Moat is Shrinking

AWS has historically maintained its dominance through:

  1. Complexity as a moat – Hard to leave once you’re deeply integrated
  2. Tribal knowledge – Years of experience with AWS-specific patterns
  3. Documentation depth – Knowing where to find the right docs
  4. Migration fear – “What if something breaks?”

AI changes all of this:

  • Complexity becomes navigable – Claude can handle multi-service architectures
  • Tribal knowledge is democratized – The AI has read all the docs
  • Documentation is searchable – Ask and receive
  • Migration becomes iterative – Fix issues in real-time, not post-mortems

The Numbers

Metric                  Traditional   With Claude
Planning                2 weeks       0
Implementation          2 weeks       3 hours
Testing                 1 week        30 minutes
Downtime                4-8 hours     0
Post-migration fixes    1 week        1 hour
Total                   6 weeks       4 hours

What I Learned

  1. Don’t fear the migration – With AI assistance, changing cloud providers is no longer a multi-week project.

  2. Verify variable names – The VITE_API_URL vs VITE_API_BASE bug cost 20 minutes. Always check that env var names match what your code expects.

  3. nginx proxy_pass is tricky – If your proxy_pass has a URI, nginx will replace the matched location. Use rewrite for SPA fallbacks.

  4. Managed databases have different users – jo4admin (AWS) vs doadmin (DO). Always check owner references in dumps.

  5. Export env vars explicitly – Don’t rely on Vite’s mode loading. export VITE_API_BASE=... before build is bulletproof.

The Future

AWS isn’t going anywhere, and for complex enterprise workloads, it’s still the right choice. But for startups, side projects, and cost-conscious teams?

The switching cost just dropped from “months of planning” to “a Saturday morning.”

And that changes everything.

Have you done a cloud migration with AI assistance? I’d love to hear your experience in the comments!

Building jo4.io – a URL shortener with analytics. Now running on DigitalOcean thanks to Claude.
